A function \(\phi(x,y)\) has a stationary point when \(\partial\phi/\partial x = \partial\phi/\partial y = 0\).

Provided \(\Delta\) is non-zero at a stationary point, where
\[ \Delta = \frac{\partial^2\phi}{\partial x^2}\,\frac{\partial^2\phi}{\partial y^2} - \left(\frac{\partial^2\phi}{\partial x\,\partial y}\right)^2, \]
the following conditions on the second derivatives there determine whether it is a maximum, a minimum or a saddle point:

Maximum:      \(\Delta > 0\)  and  \(\partial^2\phi/\partial x^2 < 0\)
Minimum:      \(\Delta > 0\)  and  \(\partial^2\phi/\partial x^2 > 0\)
Saddle point: all other cases for which \(\Delta\) is non-zero.

The case \(\Delta = 0\) can be a maximum, a minimum, a saddle point, or none of these.
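The classification above can be sketched directly in code. This is a minimal illustration (the function name is invented for the example), taking the three second derivatives at a stationary point:

```python
def classify_stationary_point(fxx, fyy, fxy):
    """Classify a stationary point from its second derivatives there,
    using Delta = fxx*fyy - fxy**2 as in the conditions above."""
    delta = fxx * fyy - fxy ** 2
    if delta == 0:
        return "inconclusive"      # Delta = 0: test gives no answer
    if delta < 0:
        return "saddle point"      # all other non-zero cases
    return "maximum" if fxx < 0 else "minimum"

# phi = x^2 + y^2 at the origin: fxx = fyy = 2, fxy = 0 -> minimum
kind = classify_stationary_point(2.0, 2.0, 0.0)
```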
Total differential theorem

For a function \(\phi(x,y,z,\ldots)\)
\[ d\phi = \frac{\partial\phi}{\partial x}dx + \frac{\partial\phi}{\partial y}dy + \frac{\partial\phi}{\partial z}dz + \cdots \]
in which \(\partial\phi/\partial x\) means \(\left(\partial\phi/\partial x\right)_{y,z,\ldots}\) (i.e. with \(y, z, \ldots\) kept constant).
Chain rule

When \(x, y, z, \ldots\) are functions of \(u, v, w, \ldots\)
\[ \frac{\partial\phi}{\partial u} = \frac{\partial\phi}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial\phi}{\partial y}\frac{\partial y}{\partial u} + \frac{\partial\phi}{\partial z}\frac{\partial z}{\partial u} + \cdots \]
and similarly for derivatives with respect to \(v, w, \ldots\).
9. Differential Equations
Integrating factor
A first order o.d.e. of the form
\[ \frac{dy}{dx} + P(x)\,y = Q(x) \]
can be integrated using the integrating factor \(\exp\left(\int P\,dx\right)\), so that the equation takes the form
\[ \frac{d}{dx}\left[\, y \exp\left(\int^x P\,dx'\right) \right] = Q(x)\,\exp\left(\int^x P\,dx'\right). \]
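The integrating-factor formula can be checked numerically. A sketch (the function name is invented), which integrates \(y = e^{-I}\left(y_0 + \int_0^x Q\,e^{I}\,dx'\right)\) with \(I = \int_0^x P\,dx'\) by trapezoidal quadrature:

```python
import math

def solve_linear_ode(P, Q, y0, x, n=10000):
    """Integrate dy/dx + P(x) y = Q(x), y(0) = y0, on [0, x] via the
    integrating factor exp(I), I = int_0^x P dx', using the trapezium
    rule for both running integrals."""
    h = x / n
    I = 0.0                      # running int_0^x P dx'
    acc = 0.0                    # running int_0^x Q e^I dx'
    prev = Q(0.0)                # Q(0) * e^{I(0)} with I(0) = 0
    for k in range(1, n + 1):
        xk = k * h
        I += 0.5 * h * (P(xk - h) + P(xk))
        cur = Q(xk) * math.exp(I)
        acc += 0.5 * h * (prev + cur)
        prev = cur
    return math.exp(-I) * (y0 + acc)

# dy/dx + y = x, y(0) = 1 has exact solution y = x - 1 + 2 e^{-x}
y = solve_linear_ode(P=lambda t: 1.0, Q=lambda t: t, y0=1.0, x=2.0)
```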
Particular integrals

For linear differential equations with constant coefficients:

  Right-hand side                             Trial P.I.
  constant                                    \(a\)
  \(x^n\) (\(n\) integer)                     \(a x^n + b x^{n-1} + \cdots\)
  \(e^{kx}\)                                  \(a e^{kx}\)
  \(\sin px\) or \(\cos px\)                  \(a\sin px + b\cos px\)
  \(e^{kx}\sin px\) or \(e^{kx}\cos px\)      \(e^{kx}(a\sin px + b\cos px)\)
For the special case when the right-hand side has an exponential or trigonometric factor which is also a solution of the differential equation:

  Complementary function                   Right-hand side                          Trial P.I.
  \(e^{kx}\)                               \(e^{kx}\)                               \(a x e^{kx}\)
  \(e^{kx}\)                               \(x^n e^{kx}\)                           \((a x^{n+1} + b x^n + \cdots)\,e^{kx}\)
  \(\sin px\) or \(\cos px\)               \(\sin px\) or \(\cos px\)               \(x\,(a\sin px + b\cos px)\)
  \(e^{kx}\sin px\) or \(e^{kx}\cos px\)   \(e^{kx}\sin px\) or \(e^{kx}\cos px\)   \(x\,e^{kx}(a\sin px + b\cos px)\)
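The first table's \(e^{kx}\) row can be verified by substitution: for \(\sum_j c_j y^{(j)} = e^{kx}\), the trial \(y = a e^{kx}\) gives \(a \sum_j c_j k^j = 1\). A small sketch (function name invented):

```python
def trial_pi_coefficient(coeffs, k):
    """For sum_j coeffs[j] * y^(j) = e^{kx}, the trial P.I. y = a e^{kx}
    gives a = 1 / (characteristic polynomial at k).  Valid only when that
    polynomial is non-zero at k, i.e. e^{kx} is not in the C.F."""
    char = sum(c * k ** j for j, c in enumerate(coeffs))
    if char == 0:
        raise ValueError("e^{kx} solves the homogeneous equation; "
                         "use the x e^{kx} trial form instead")
    return 1.0 / char

# y'' + 3y' + 2y = e^{3x}  ->  a = 1/(9 + 9 + 2) = 1/20
a = trial_pi_coefficient([2, 3, 1], k=3)
```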
10. Integration

Standard indefinite integrals

  Integrand                          Integral                              Integrand                               Integral
  \(\sin x\)                         \(-\cos x\)                           \(\sinh x\)                             \(\cosh x\)
  \(\cos x\)                         \(\sin x\)                            \(\cosh x\)                             \(\sinh x\)
  \(\tan x\)                         \(-\ln(\cos x)\)                      \(\tanh x\)                             \(\ln(\cosh x)\)
  \(\mathrm{cosec}\,x\)              \(\ln(\tan\frac{x}{2})\)              \(\mathrm{cosech}\,x\)                  \(\ln(\tanh\frac{x}{2})\)
  \(\sec x\)                         \(\ln(\tan x + \sec x)\)              \(\mathrm{sech}\,x\)                    \(2\tan^{-1}(e^x)\)
  \(\cot x\)                         \(\ln(\sin x)\)                       \(\coth x\)                             \(\ln(\sinh x)\)
  \(\sec^2 x\)                       \(\tan x\)                            \(\mathrm{sech}^2 x\)                   \(\tanh x\)
  \(\tan x\,\sec x\)                 \(\sec x\)                            \(\tanh x\,\mathrm{sech}\,x\)           \(-\mathrm{sech}\,x\)
  \(\cot x\,\mathrm{cosec}\,x\)      \(-\mathrm{cosec}\,x\)                \(\coth x\,\mathrm{cosech}\,x\)         \(-\mathrm{cosech}\,x\)

  \(\dfrac{1}{\sqrt{a^2-x^2}}\)      \(\sin^{-1}\left(\dfrac{x}{a}\right)\)  or  \(-\cos^{-1}\left(\dfrac{x}{a}\right)\)
  \(\dfrac{1}{\sqrt{x^2+a^2}}\)      \(\sinh^{-1}\left(\dfrac{x}{a}\right)\)  or  \(\ln\left(x+\sqrt{x^2+a^2}\right)\)
  \(\dfrac{1}{\sqrt{x^2-a^2}}\)      \(\cosh^{-1}\left(\dfrac{x}{a}\right)\)  or  \(\ln\left(x+\sqrt{x^2-a^2}\right)\)
  \(\dfrac{1}{x^2+a^2}\)             \(\dfrac{1}{a}\tan^{-1}\left(\dfrac{x}{a}\right)\)

Standard substitutions
If the integrand is a function of:                 Substitute:
  \(a^2-x^2\) or \(\sqrt{a^2-x^2}\)                \(x = a\sin\theta\)  or  \(x = a\cos\theta\)
  \(x^2+a^2\) or \(\sqrt{x^2+a^2}\)                \(x = a\tan\theta\)  or  \(x = a\sinh\theta\)
  \(x^2-a^2\) or \(\sqrt{x^2-a^2}\)                \(x = a\sec\theta\)  or  \(x = a\cosh\theta\)

or of the form:
  \(\dfrac{1}{(ax+b)\sqrt{px+q}}\)                 \(px+q = u^2\)  or  \(ax+b = \dfrac{1}{u}\)

or a rational function of \(\sin x\) and/or \(\cos x\):
  \(t = \tan\dfrac{x}{2}\)   \(\left[\text{whence } \sin x = \dfrac{2t}{1+t^2},\ \cos x = \dfrac{1-t^2}{1+t^2}\right]\)
Integration by parts
\[ \int_a^b u\,\frac{dv}{dx}\,dx = \left[\,uv\,\right]_a^b - \int_a^b v\,\frac{du}{dx}\,dx \]
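A numerical check of the parts formula, using a simple trapezium-rule integrator (function name invented): \(\int_0^1 x e^x\,dx = [x e^x]_0^1 - \int_0^1 e^x\,dx = 1\).

```python
import math

def integrate(f, a, b, n=10000):
    """Composite trapezium rule for int_a^b f dx."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return s * h

# Left side of the parts identity, then the right side
lhs = integrate(lambda x: x * math.exp(x), 0.0, 1.0)
rhs = (1.0 * math.e - 0.0) - integrate(math.exp, 0.0, 1.0)
```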
Differentiation of an integral
\[ \frac{d}{dx}\int_{a(x)}^{b(x)} f(x,y)\,dy = f(x,b)\frac{db}{dx} - f(x,a)\frac{da}{dx} + \int_{a(x)}^{b(x)} \frac{\partial f(x,y)}{\partial x}\,dy \]
Change of variable in surface and volume integration

Surface:
\[ \iint_S f(x,y)\,dx\,dy = \iint_{S'} F(u,v)\,|J|\,du\,dv \]
where \(u(x,y)\) and \(v(x,y)\) are the new variables and where
\[ J \equiv \frac{\partial(x,y)}{\partial(u,v)} = \begin{vmatrix} \partial x/\partial u & \partial x/\partial v \\ \partial y/\partial u & \partial y/\partial v \end{vmatrix} \]
is the Jacobian.

For surface integrals involving vector normals
\[ \mathbf{n}\,dA \equiv \mathbf{n}\,dx\,dy = \pm\,\frac{\partial\mathbf{r}}{\partial u}\times\frac{\partial\mathbf{r}}{\partial v}\,du\,dv \]
and the sign is chosen to preserve the sense of the normal.
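The Jacobian determinant can be estimated by finite differences. A sketch (function name invented), checked on polar coordinates where \(J = r\):

```python
import math

def jacobian_det(x, y, u, v, h=1e-6):
    """Numerical 2x2 Jacobian determinant d(x,y)/d(u,v) at (u, v),
    by central differences on the map (u,v) -> (x(u,v), y(u,v))."""
    xu = (x(u + h, v) - x(u - h, v)) / (2 * h)
    xv = (x(u, v + h) - x(u, v - h)) / (2 * h)
    yu = (y(u + h, v) - y(u - h, v)) / (2 * h)
    yv = (y(u, v + h) - y(u, v - h)) / (2 * h)
    return xu * yv - xv * yu

# Polar coordinates x = r cos(theta), y = r sin(theta): J = r
x = lambda r, t: r * math.cos(t)
y = lambda r, t: r * math.sin(t)
J = jacobian_det(x, y, u=0.5, v=1.0)   # should be close to r = 0.5
```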
Volume:
\[ \iiint_V f(x,y,z)\,dx\,dy\,dz = \iiint_{V'} F(u,v,w)\,|J|\,du\,dv\,dw \]
where
\[ J \equiv \frac{\partial(x,y,z)}{\partial(u,v,w)} = \begin{vmatrix} \partial x/\partial u & \partial x/\partial v & \partial x/\partial w \\ \partial y/\partial u & \partial y/\partial v & \partial y/\partial w \\ \partial z/\partial u & \partial z/\partial v & \partial z/\partial w \end{vmatrix} \]

Note
\[ \frac{1}{J} = \frac{\partial(u,v,\ldots)}{\partial(x,y,\ldots)} \]
11. Vector Products
Scalar Product
\[ \mathbf{a}\cdot\mathbf{b} = |\mathbf{a}|\,|\mathbf{b}|\cos\theta \]
(where \(\theta\) is the angle between the vectors)
Vector Product
\[ \mathbf{a}\times\mathbf{b} = |\mathbf{a}|\,|\mathbf{b}|\sin\theta\ \hat{\mathbf{n}} \]
(where \(\theta\) is the angle between the vectors, and \(\hat{\mathbf{n}}\) is a unit vector normal to the plane containing a and b such that a, b, \(\hat{\mathbf{n}}\) form a right-handed set)
\[ \mathbf{a}\times\mathbf{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_x & a_y & a_z \\ b_x & b_y & b_z \end{vmatrix} = -\,\mathbf{b}\times\mathbf{a} \]
Scalar Triple Product
\[ \mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) = \begin{vmatrix} a_x & a_y & a_z \\ b_x & b_y & b_z \\ c_x & c_y & c_z \end{vmatrix} = \begin{vmatrix} a_x & b_x & c_x \\ a_y & b_y & c_y \\ a_z & b_z & c_z \end{vmatrix} \]
\[ = \mathbf{b}\cdot(\mathbf{c}\times\mathbf{a}) = \mathbf{c}\cdot(\mathbf{a}\times\mathbf{b}) = -\,\mathbf{a}\cdot(\mathbf{c}\times\mathbf{b}) = -\,\mathbf{c}\cdot(\mathbf{b}\times\mathbf{a}) = -\,\mathbf{b}\cdot(\mathbf{a}\times\mathbf{c}) \]
The notation \([\mathbf{a}, \mathbf{b}, \mathbf{c}]\) is also used for \(\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})\).
Vector Triple Product
\[ \mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = (\mathbf{a}\cdot\mathbf{c})\,\mathbf{b} - (\mathbf{a}\cdot\mathbf{b})\,\mathbf{c} \]
\[ (\mathbf{a}\times\mathbf{b})\times\mathbf{c} = (\mathbf{a}\cdot\mathbf{c})\,\mathbf{b} - (\mathbf{b}\cdot\mathbf{c})\,\mathbf{a} \]
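The vector triple product identity can be checked on sample integer vectors, where the equality is exact. A minimal sketch (function names invented):

```python
def cross(a, b):
    """Cross product of 3-vectors given as lists."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Check a x (b x c) = (a.c) b - (a.b) c
a, b, c = [1, 2, 3], [4, -5, 6], [-7, 8, 9]
lhs = cross(a, cross(b, c))
rhs = [dot(a, c) * bi - dot(a, b) * ci for bi, ci in zip(b, c)]
```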
12. Matrices and Linear Algebra

\((AB \cdots N)^T = N^T \cdots B^T A^T\)   where \((\ )^T\) denotes the transpose
\((AB \cdots N)^{-1} = N^{-1} \cdots B^{-1} A^{-1}\)   (if individual inverses exist)
\(\det(AB \cdots N) = \det A\,\det B \cdots \det N\)   (if individual matrices are square)

If A is square and if \(A^{-1}\) exists (i.e. if \(\det A \neq 0\)), then \(Ax = b\) has a unique solution \(x = A^{-1}b\).

If A is square then \(Ax = 0\) has a non-trivial solution if and only if \(\det A = 0\).

For an orthogonal matrix Q
\(Q^{-1} = Q^T\),  \(\det Q = \pm 1\).
\(Q^T\) is also orthogonal.
If \(\det Q = +1\) then Q describes a rotation without reflection.
Eigenvalues and Eigenvectors
If A is an n x n matrix, its eigenvalues \(\lambda\) and corresponding eigenvectors u satisfy \(Au = \lambda u\).
There are in general n eigenvalues \(\lambda_i\) and corresponding eigenvectors \(u_i\).
The eigenvalues are the roots of the n'th order polynomial equation
\(\det(A - \lambda I) = 0\)  where I is the identity matrix.
If A is real and symmetric the eigenvalues are real and the eigenvectors corresponding to different eigenvalues are orthogonal. For repeated eigenvalues, the corresponding eigenvectors can be chosen to be orthogonal. Furthermore,
\[ U^T A U = \Lambda \quad\text{and}\quad A = U \Lambda U^T \]
where \(\Lambda\) is the diagonal matrix whose elements are the eigenvalues of A and U is the orthogonal matrix whose columns are the normalized eigenvectors of A.

Rayleigh's quotient
If x is an approximation to an eigenvector of A then \(\dfrac{x^T A x}{x^T x}\) is a good approximation to the corresponding eigenvalue.
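Rayleigh's quotient can be demonstrated with a small symmetric matrix. A sketch (function name invented), using \(A = \begin{pmatrix}2&1\\1&2\end{pmatrix}\), whose eigenvalues are 3 and 1 with eigenvectors \((1,1)\) and \((1,-1)\):

```python
def rayleigh_quotient(A, x):
    """x^T A x / x^T x, for A as a list of rows and x as a list."""
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    return sum(xi * yi for xi, yi in zip(x, Ax)) / sum(xi * xi for xi in x)

A = [[2.0, 1.0],
     [1.0, 2.0]]
# A rough approximation [1, 0.9] to the eigenvector [1, 1] still gives
# an eigenvalue estimate close to 3 (the error is second order in the
# eigenvector error).
approx = rayleigh_quotient(A, [1.0, 0.9])
```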
Material relevant to 18 Linear Algebra

Rank
The rank, r, of a matrix is the number of independent rows, or columns.

Fundamental Subspaces of an m x n matrix A
The column space is the space spanned by the columns. It has dimension equal to the rank, r, and is a subspace of \(\mathbb{R}^m\).
The nullspace is the space spanned by the solutions x of the equation \(Ax = 0\). The nullspace has dimension n - r and is a subspace of \(\mathbb{R}^n\).
The row space is the space spanned by the rows of A. It has dimension equal to r and is a subspace of \(\mathbb{R}^n\).
The left-nullspace is the space spanned by the solutions y of the equation \(y^T A = 0\). It has dimension m - r, and is a subspace of \(\mathbb{R}^m\).
The nullspace is the orthogonal complement of the row space in \(\mathbb{R}^n\).
The left-nullspace is the orthogonal complement of the column space in \(\mathbb{R}^m\).
For \(Ax = b\) to have a solution, b must lie in the column space, i.e. \(y^T b = 0\) for any y such that \(A^T y = 0\).
Decompositions of an m x n matrix A

LU Decomposition
\(PA = LU\), where P is a permutation matrix, L a lower triangular matrix and U an m x n echelon matrix.

QR Decomposition
\(A = QR\), where the columns of Q are orthonormal, and R is upper-triangular and invertible. When m = n and so all matrices are square, Q is an orthogonal matrix.

Eigenvalue Decomposition (only for m = n)
Provided that A has n linearly independent eigenvectors, \(A = S \Lambda S^{-1}\), where S has the eigenvectors of A as its columns, and \(\Lambda\) is a diagonal matrix with the eigenvalues along the diagonal.
If A is real and symmetric, see under Eigenvalues and Eigenvectors above.
Singular Value Decomposition
\[ A = Q_1 \Sigma Q_2^T \qquad \text{(orthogonal x diagonal x orthogonal)} \]
• The columns of \(Q_1\) (m x m) are the eigenvectors of \(A A^T\).
• The columns of \(Q_2\) (n x n) are the eigenvectors of \(A^T A\).
• The r singular values, arranged in descending order on the diagonal of \(\Sigma\) (m x n), are the square roots of the non-zero eigenvalues of both \(A A^T\) and \(A^T A\). r is the rank of the matrix.

Basis of column space:    first r columns of \(Q_1\)
Basis of left-nullspace:  last m - r columns of \(Q_1\)
Basis of row space:       first r columns of \(Q_2\)
Basis of nullspace:       last n - r columns of \(Q_2\)
General solution of Ax = b by Gaussian Elimination
1. Transform \(Ax = b\) into \(Ux = c\).
2. Set all free variables to zero and find a particular solution \(x_0\).
3. Set the RHS to zero, give each free variable in turn the value 1 while the others are zero, and solve to find a set of vectors which span the nullspace of A. Arrange these vectors as the columns of a matrix X.
4. The general solution is \(x_0 + Xu\), where u is arbitrary.

Least squares solution of Ax = b using QR
Solve \(Rx = Q^T b\) by back-substitution.
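The QR least-squares recipe can be sketched in a few lines of pure Python: Gram-Schmidt to build Q and R, then back-substitution of \(Rx = Q^T b\). This is an illustrative sketch (function name invented), not a numerically robust solver:

```python
import math

def qr_least_squares(A, b):
    """Least-squares solution of A x = b (A: list of rows, m >= n, full
    column rank) via classical Gram-Schmidt QR, then back-substitution."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    Q = []                                   # orthonormal columns of Q
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(j):
            R[i][j] = sum(Q[i][k] * cols[j][k] for k in range(m))
            v = [vk - R[i][j] * qk for vk, qk in zip(v, Q[i])]
        R[j][j] = math.sqrt(sum(vk * vk for vk in v))
        Q.append([vk / R[j][j] for vk in v])
    # Solve R x = Q^T b by back-substitution
    qtb = [sum(Q[i][k] * b[k] for k in range(m)) for i in range(n)]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (qtb[i] - sum(R[i][k] * x[k] for k in range(i + 1, n))) / R[i][i]
    return x

# Fit a straight line y = c0 + c1 t through (0, 1), (1, 2), (2, 2.5)
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 2.0, 2.5]
c0, c1 = qr_least_squares(A, b)
```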
13. Vector Calculus

\(\phi\) is a scalar function of position and u a vector function.

Cartesian coordinates x, y, z:  \(\mathbf{u} = u_x\,\mathbf{i} + u_y\,\mathbf{j} + u_z\,\mathbf{k}\)
\[ \operatorname{grad}\phi \equiv \nabla\phi = \mathbf{i}\,\frac{\partial\phi}{\partial x} + \mathbf{j}\,\frac{\partial\phi}{\partial y} + \mathbf{k}\,\frac{\partial\phi}{\partial z} \]
\[ \operatorname{div}\mathbf{u} \equiv \nabla\cdot\mathbf{u} = \frac{\partial u_x}{\partial x} + \frac{\partial u_y}{\partial y} + \frac{\partial u_z}{\partial z} \]
\[ \operatorname{curl}\mathbf{u} \equiv \nabla\times\mathbf{u} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \partial/\partial x & \partial/\partial y & \partial/\partial z \\ u_x & u_y & u_z \end{vmatrix} \]
\[ \operatorname{div}(\operatorname{grad}\phi) \equiv \nabla^2\phi = \frac{\partial^2\phi}{\partial x^2} + \frac{\partial^2\phi}{\partial y^2} + \frac{\partial^2\phi}{\partial z^2} \qquad \text{(the Laplacian operator)} \]

Cylindrical polar coordinates r, \(\theta\), z:  \(\mathbf{u} = u_r\,\mathbf{e}_r + u_\theta\,\mathbf{e}_\theta + u_z\,\mathbf{e}_z\)
(\(\mathbf{e}_r\), \(\mathbf{e}_\theta\) and \(\mathbf{e}_z\) are unit radial, tangential and axial vectors respectively)
\[ \operatorname{grad}\phi \equiv \nabla\phi = \mathbf{e}_r\,\frac{\partial\phi}{\partial r} + \frac{\mathbf{e}_\theta}{r}\frac{\partial\phi}{\partial\theta} + \mathbf{e}_z\,\frac{\partial\phi}{\partial z} \]
\[ \operatorname{div}\mathbf{u} \equiv \nabla\cdot\mathbf{u} = \frac{1}{r}\frac{\partial(r u_r)}{\partial r} + \frac{1}{r}\frac{\partial u_\theta}{\partial\theta} + \frac{\partial u_z}{\partial z} \]
\[ \operatorname{curl}\mathbf{u} \equiv \nabla\times\mathbf{u} = \frac{1}{r}\begin{vmatrix} \mathbf{e}_r & r\,\mathbf{e}_\theta & \mathbf{e}_z \\ \partial/\partial r & \partial/\partial\theta & \partial/\partial z \\ u_r & r\,u_\theta & u_z \end{vmatrix} \]
\[ \operatorname{div}(\operatorname{grad}\phi) \equiv \nabla^2\phi = \frac{1}{r}\frac{\partial}{\partial r}\left(r\,\frac{\partial\phi}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2\phi}{\partial\theta^2} + \frac{\partial^2\phi}{\partial z^2} \]
Spherical polar coordinates r, \(\theta\), \(\psi\):  \(\mathbf{u} = u_r\,\mathbf{e}_r + u_\theta\,\mathbf{e}_\theta + u_\psi\,\mathbf{e}_\psi\)  where \(0 \le \theta \le \pi\), \(0 \le \psi \le 2\pi\)
(\(\mathbf{e}_r\), \(\mathbf{e}_\theta\) and \(\mathbf{e}_\psi\) are unit radial, longitudinal and azimuthal vectors respectively)
\[ \operatorname{grad}\phi \equiv \nabla\phi = \mathbf{e}_r\,\frac{\partial\phi}{\partial r} + \frac{\mathbf{e}_\theta}{r}\frac{\partial\phi}{\partial\theta} + \frac{\mathbf{e}_\psi}{r\sin\theta}\frac{\partial\phi}{\partial\psi} \]
\[ \operatorname{div}\mathbf{u} \equiv \nabla\cdot\mathbf{u} = \frac{1}{r^2}\frac{\partial(r^2 u_r)}{\partial r} + \frac{1}{r\sin\theta}\frac{\partial(\sin\theta\,u_\theta)}{\partial\theta} + \frac{1}{r\sin\theta}\frac{\partial u_\psi}{\partial\psi} \]
\[ \operatorname{curl}\mathbf{u} \equiv \nabla\times\mathbf{u} = \frac{1}{r^2\sin\theta}\begin{vmatrix} \mathbf{e}_r & r\,\mathbf{e}_\theta & r\sin\theta\,\mathbf{e}_\psi \\ \partial/\partial r & \partial/\partial\theta & \partial/\partial\psi \\ u_r & r\,u_\theta & r\sin\theta\,u_\psi \end{vmatrix} \]

Spherical symmetry \(\phi = \phi(r)\), \(\mathbf{u} = u(r)\,\mathbf{e}_r\)  (\(\mathbf{e}_r\) is a unit radial vector)
\[ \operatorname{div}\mathbf{u} = \frac{1}{r^2}\frac{d}{dr}\left(r^2 u\right) \qquad\qquad \operatorname{curl}\mathbf{u} = 0 \]
Potentials

A vector field u is said to be irrotational if \(\nabla\times\mathbf{u} = 0\).
A vector field u is said to be solenoidal or incompressible if \(\nabla\cdot\mathbf{u} = 0\).
If \(\nabla\times\mathbf{u} = 0\), then there exists a scalar potential \(\phi\) such that \(\mathbf{u} = \nabla\phi\) (for some applications it is more natural to use \(\mathbf{u} = -\nabla\phi\)).
If \(\nabla\cdot\mathbf{u} = 0\), then there exists a vector potential A such that \(\mathbf{u} = \nabla\times\mathbf{A}\). (A is usually chosen so that \(\nabla\cdot\mathbf{A} = 0\).)

Identities
\[ \nabla(\phi_1 + \phi_2) = \nabla\phi_1 + \nabla\phi_2 \]
\[ \nabla\cdot(\phi\,\mathbf{u}) = \phi\,\nabla\cdot\mathbf{u} + (\nabla\phi)\cdot\mathbf{u} \]
\[ \nabla\times(\phi\,\mathbf{u}) = \phi\,\nabla\times\mathbf{u} + (\nabla\phi)\times\mathbf{u} \]
\[ \nabla\cdot(\nabla\times\mathbf{u}) = 0 \]
\[ \nabla\times(\nabla\phi) = 0 \]
\[ \mathbf{u}\times(\nabla\times\mathbf{u}) + (\mathbf{u}\cdot\nabla)\,\mathbf{u} = \nabla\left[\tfrac{1}{2}u^2\right] \]
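The identity \(\nabla\cdot(\nabla\times\mathbf{u}) = 0\) can be checked numerically by nesting central differences. A sketch (function names invented), on an arbitrary smooth polynomial field:

```python
def curl_then_div(u, p, h=1e-4):
    """Finite-difference estimate of div(curl u) at p = (x, y, z);
    u maps (x, y, z) -> (ux, uy, uz).  Should be ~0 for smooth u."""
    def partial(f, i, q, comp):
        qp, qm = list(q), list(q)
        qp[i] += h
        qm[i] -= h
        return (f(*qp)[comp] - f(*qm)[comp]) / (2 * h)

    def curl(q):
        return (partial(u, 1, q, 2) - partial(u, 2, q, 1),
                partial(u, 2, q, 0) - partial(u, 0, q, 2),
                partial(u, 0, q, 1) - partial(u, 1, q, 0))

    total = 0.0                      # divergence of the curl
    for i in range(3):
        qp, qm = list(p), list(p)
        qp[i] += h
        qm[i] -= h
        total += (curl(qp)[i] - curl(qm)[i]) / (2 * h)
    return total

u = lambda x, y, z: (y * z, x * x * z, x * y * y)   # arbitrary smooth field
err = curl_then_div(u, (0.3, -0.7, 1.1))
```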
Gauss' Theorem (Divergence Theorem)
\[ \iiint_V \nabla\cdot\mathbf{u}\,dV = \iint_S \mathbf{u}\cdot d\mathbf{A} \]
for a closed surface S enclosing a volume V. The outward normal is taken for dA.

Stokes' Theorem
\[ \iint_S \nabla\times\mathbf{u}\cdot d\mathbf{A} = \oint_C \mathbf{u}\cdot d\mathbf{l} \]
for an open surface S with a closed boundary curve C (the 'rim'). The normal to the surface and the sense of the line integral are related by a right hand screw rule.
14. Fourier Series

Full range

For \(-\pi \le \theta \le \pi\)
\[ f(\theta) = \tfrac{1}{2}a_0 + \sum_{n=1}^{\infty}\left(a_n \cos n\theta + b_n \sin n\theta\right) \]
where
\[ a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(\theta)\cos n\theta\,d\theta, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(\theta)\sin n\theta\,d\theta \]

Equivalently
\[ f(\theta) = \sum_{n=-\infty}^{\infty} c_n\,e^{in\theta} \]
where
\[ c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(\theta)\,e^{-in\theta}\,d\theta = \begin{cases} \tfrac{1}{2}(a_n - i\,b_n) & \text{for } n > 0 \\[4pt] \tfrac{1}{2}(a_{-n} + i\,b_{-n}) & \text{for } n < 0 \\[4pt] \tfrac{1}{2}a_0 & \text{for } n = 0 \end{cases} \]

If the function \(f(\theta)\) is periodic, of period \(2\pi\), then these relationships are valid for all \(\theta\). The integrals may then be taken over any range of \(2\pi\).
Half range

If a Fourier series representation of \(f(\theta)\) is required to be valid only in \(0 \le \theta \le \pi\), then it need contain either the sine terms alone or the cosine terms alone. For example
\[ f(\theta) = \tfrac{1}{2}a_0 + \sum_{n=1}^{\infty} a_n \cos n\theta \qquad\text{where}\qquad a_n = \frac{2}{\pi}\int_0^{\pi} f(\theta)\cos n\theta\,d\theta \]
General Range \(0 \le t \le T\)
\[ f(t) = \tfrac{1}{2}a_0 + \sum_{n=1}^{\infty}\left(a_n \cos\frac{2\pi n t}{T} + b_n \sin\frac{2\pi n t}{T}\right) \]
where
\[ a_n = \frac{2}{T}\int_0^{T} f(t)\cos\frac{2\pi n t}{T}\,dt, \qquad b_n = \frac{2}{T}\int_0^{T} f(t)\sin\frac{2\pi n t}{T}\,dt \]

Equivalently
\[ f(t) = \sum_{n=-\infty}^{\infty} c_n\,e^{i 2\pi n t/T} \qquad\text{where}\qquad c_n = \frac{1}{T}\int_0^{T} f(t)\,e^{-i 2\pi n t/T}\,dt \]
i.e.
\[ f(t) = \sum_{n=-\infty}^{\infty} c_n\,e^{i n \omega_0 t} \qquad\text{where}\qquad \omega_0 = \frac{2\pi}{T} \]

The (scientific) fundamental frequency is \(\omega_0 = 2\pi/T\) and the (scientific) n'th harmonic is \(n\omega_0\).

Examples
Some specific complex Fourier series are shown overleaf. Examples of specific real Fourier series can be found in the Electrical Data Book.
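The coefficient integrals above are easy to evaluate numerically. A sketch (function name invented), using the midpoint rule and checked on a square wave, whose known odd-harmonic coefficients are \(b_n = 4/(n\pi)\):

```python
import math

def fourier_coeffs(f, T, n, m=20000):
    """Midpoint-rule estimate of a_n and b_n over one period [0, T],
    using a_n = (2/T) int f cos(2 pi n t / T) dt (and sin for b_n)."""
    h = T / m
    a = b = 0.0
    for k in range(m):
        t = (k + 0.5) * h
        w = 2 * math.pi * n * t / T
        a += f(t) * math.cos(w)
        b += f(t) * math.sin(w)
    return 2 * a * h / T, 2 * b * h / T

# Square wave, period 1: +1 on (0, 1/2), -1 on (1/2, 1)
square = lambda t: 1.0 if t % 1.0 < 0.5 else -1.0
a1, b1 = fourier_coeffs(square, T=1.0, n=1)   # expect a1 ~ 0, b1 ~ 4/pi
```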
Pulse wave: a pulse train of unit height, pulse width a and period T has the complex Fourier series
\[ f(t) = \frac{a}{T}\left[\,1 + \sum_{\substack{n=-\infty \\ n\neq 0}}^{\infty} \frac{\sin(n\pi a/T)}{n\pi a/T}\,e^{i n \omega_0 t}\right] \]

15. Fourier Transforms
\[ \bar{y}(\omega) = \int_{-\infty}^{\infty} y(t)\,e^{-i\omega t}\,dt, \qquad y(t) = \int_{-\infty}^{\infty} \bar{y}(\omega)\,e^{i\omega t}\,\frac{d\omega}{2\pi} \]

Caution -
(a) Fourier transforms are sometimes written in terms of frequency \(f = \omega/2\pi\).
(b) Some books handle the \(2\pi\) factor differently and define transforms with differences in signs of the exponent.
Discrete Fourier Transform

The DFT of a sequence \((x_n,\ n = 0, 1, \ldots, N-1)\) is defined by
\[ X_k = \sum_{n=0}^{N-1} x_n\,e^{-i 2\pi k n/N} \qquad \text{for } 0 \le k \le N-1 \]
with inverse DFT
\[ x_n = \frac{1}{N}\sum_{k=0}^{N-1} X_k\,e^{i 2\pi k n/N} \qquad \text{for } 0 \le n \le N-1 \]

Caution - Some books handle the \(1/N\) factor differently and define transforms with differences in signs of the exponent.
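The DFT pair above, written directly (an \(O(N^2)\) sketch; function names invented, and in practice an FFT would be used):

```python
import cmath

def dft(x):
    """Direct DFT:  X_k = sum_n x_n e^{-i 2 pi k n / N}."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT with the 1/N factor as defined above."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

x = [1.0, 2.0, 0.0, -1.0]
roundtrip = idft(dft(x))       # should recover x (to rounding error)
```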
16. Laplace Transforms
\[ \bar{x}(s) = \mathcal{L}\{x(t)\} = \int_{0^-}^{\infty} x(t)\,e^{-st}\,dt \]
N.B. All functions in Laplace transform theory are zero for t < 0.

Initial Value Theorem:
If the limit as \(s \to +\infty\) of \(s\,\bar{x}(s)\) is finite, then
\[ x(0^+) = \lim_{s\to+\infty} s\,\bar{x}(s) \]

Final Value Theorem:
Providing \(x(t)\) tends to a limit as \(t \to \infty\), then
\[ \lim_{t\to\infty} x(t) = \lim_{s\to 0} s\,\bar{x}(s) \]
Table of Laplace Transforms

N.B. All functions in Laplace transform theory are zero for t < 0.

  Function (for \(t \ge 0\))                       Transform                                              Remarks
  \(e^{-at}\,x(t)\)                                \(\bar{x}(s+a)\)                                       Shift in s
  \(x(t-\tau)\,H(t-\tau)\)                         \(e^{-s\tau}\,\bar{x}(s)\)                             Shift in t, \(\tau \ge 0\)
  \(\dfrac{dx(t)}{dt} \equiv x'(t)\)               \(s\,\bar{x}(s) - x(0)\)                               Differentiation
  \(\dfrac{d^2x(t)}{dt^2} \equiv x''(t)\)          \(s^2\bar{x}(s) - s\,x(0) - x'(0)\)
  \(\dfrac{d^nx(t)}{dt^n} \equiv x^{(n)}(t)\)      \(s^n\bar{x}(s) - s^{n-1}x(0) - \cdots - x^{(n-1)}(0)\)
  \(\displaystyle\int_0^t x(\tau)\,d\tau\)         \(\dfrac{1}{s}\,\bar{x}(s)\)                           Integration
  \(\displaystyle\int_0^t x_1(\tau)\,x_2(t-\tau)\,d\tau\)   \(\bar{x}_1(s)\,\bar{x}_2(s)\)                Convolution
  \(t\,x(t)\)                                      \(-\dfrac{d}{ds}\bar{x}(s)\)
  \(1 \equiv H(t) \equiv u(t)\)                    \(s^{-1}\)                                             Heaviside step function
  \(\delta(t)\)                                    \(1\)                                                  Dirac delta function
  \(H(t-\tau)\)                                    \(s^{-1}\,e^{-s\tau}\)                                 \(\tau \ge 0\)
  \(\delta(t-\tau)\)                               \(e^{-s\tau}\)                                         \(\tau \ge 0\)

  Function (for \(t \ge 0\))                       Transform
  \(t\)                                            \(s^{-2}\)
  \(t^n\)                                          \(\dfrac{n!}{s^{n+1}}\)
  \(e^{-at}\)                                      \((s+a)^{-1}\)
  \(t^n\,e^{-at}\)                                 \(\dfrac{n!}{(s+a)^{n+1}}\)
  \(\sin\omega t\)                                 \(\dfrac{\omega}{s^2+\omega^2}\)
  \(\cos\omega t\)                                 \(\dfrac{s}{s^2+\omega^2}\)
  \(e^{-at}\sin\omega t\)                          \(\dfrac{\omega}{(s+a)^2+\omega^2}\)
  \(e^{-at}\cos\omega t\)                          \(\dfrac{s+a}{(s+a)^2+\omega^2}\)
  \(t\,\sin\omega t\)                              \(\dfrac{2\omega s}{(s^2+\omega^2)^2}\)
  \(t\,\cos\omega t\)                              \(\dfrac{s^2-\omega^2}{(s^2+\omega^2)^2}\)
  \(\sinh\omega t\)                                \(\dfrac{\omega}{s^2-\omega^2}\)
  \(\cosh\omega t\)                                \(\dfrac{s}{s^2-\omega^2}\)
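Any entry in the table can be spot-checked by truncated numerical quadrature of the defining integral. A sketch (function name invented), verifying \(\mathcal{L}\{e^{-at}\} = 1/(s+a)\):

```python
import math

def laplace(x, s, T=40.0, n=100000):
    """Truncated numerical Laplace transform int_0^T x(t) e^{-st} dt,
    by the trapezium rule; adequate when the integrand has decayed
    to negligible size by t = T."""
    h = T / n
    total = 0.5 * (x(0.0) + x(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += x(t) * math.exp(-s * t)
    return total * h

# L{ e^{-2t} } evaluated at s = 1 should be 1/(s + a) = 1/3
val = laplace(lambda t: math.exp(-2.0 * t), s=1.0)
```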
17. Numerical Analysis

Finding roots of equations

Simple iteration
A method which sometimes works for an equation of the form \(x = f(x)\) is to iterate
\[ x_{n+1} = f(x_n) \]

Newton-Raphson
If the equation is \(y(x) = 0\) and \(x_n\) is an approximation to a root, then a usually better approximation \(x_{n+1}\) is given by
\[ x_{n+1} = x_n - \frac{y(x_n)}{y'(x_n)} \]
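The Newton-Raphson iteration in a few lines (function name invented), finding \(\sqrt{2}\) as the root of \(y = x^2 - 2\):

```python
def newton(y, dy, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration x_{n+1} = x_n - y(x_n) / y'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = y(x) / dy(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of y = x^2 - 2 starting from x0 = 1 converges to sqrt(2)
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```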
Numerical evaluation of integrals

Trapezium Rule
\[ \int_a^{a+h} y\,dx \approx \frac{h}{2}\left[\,y(a) + y(a+h)\,\right] \]
Thus, if the interval (a,b) is divided using n equal intervals each of length h,
\[ \int_a^b y\,dx \approx \frac{h}{2}\left[\,y_0 + 2(y_1 + y_2 + \cdots + y_{n-1}) + y_n\,\right] \]

Simpson's Rule
\[ \int_a^{a+2h} y\,dx \approx \frac{h}{3}\left[\,y(a) + 4\,y(a+h) + y(a+2h)\,\right] \]
Thus, if the interval (a,b) is divided using n equal intervals, each of length h, with n even,
\[ \int_a^b y\,dx \approx \frac{h}{3}\left[\,y_0 + 4y_1 + 2y_2 + 4y_3 + \cdots + 2y_{n-2} + 4y_{n-1} + y_n\,\right] \]
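Both composite rules, as code (function names invented). Simpson's rule is exact for cubics, which makes \(\int_0^1 x^3\,dx = 1/4\) a convenient check:

```python
def trapezium(y, a, b, n):
    """Composite trapezium rule with n equal intervals."""
    h = (b - a) / n
    return h / 2 * (y(a) + y(b) + 2 * sum(y(a + k * h) for k in range(1, n)))

def simpson(y, a, b, n):
    """Composite Simpson's rule; n must be even."""
    assert n % 2 == 0, "Simpson's rule needs an even number of intervals"
    h = (b - a) / n
    s = y(a) + y(b)
    s += 4 * sum(y(a + k * h) for k in range(1, n, 2))   # odd ordinates
    s += 2 * sum(y(a + k * h) for k in range(2, n, 2))   # even ordinates
    return h / 3 * s

t = trapezium(lambda x: x ** 3, 0.0, 1.0, 100)
s = simpson(lambda x: x ** 3, 0.0, 1.0, 100)
```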
  Distribution   Parameters          \(P(X=r) = p_r\)   (\(q = 1-p\))                              \(g(z)\)               E[X]                  Var[X]
  Bernoulli      \(0 < p < 1\)       \(p^r q^{1-r}\),   \(r = 0, 1\)                               \(q + pz\)             \(p\)                 \(pq\)
  Binomial       \(n,\ 0 < p < 1\)   \(\binom{n}{r} p^r q^{n-r}\),   \(r = 0 \ldots n\)            \((q + pz)^n\)         \(np\)                \(npq\)
  Geometric      \(0 < p < 1\)       \(q^r p\),   \(r = 0 \ldots \infty\)                          \(\dfrac{p}{1-qz}\)    \(\dfrac{q}{p}\)      \(\dfrac{q}{p^2}\)
  Poisson        \(\lambda > 0\)     \(e^{-\lambda}\dfrac{\lambda^r}{r!}\),   \(r = 0 \ldots \infty\)   \(e^{\lambda(z-1)}\)   \(\lambda\)       \(\lambda\)

Continuous Random Variables
The probability that a random variable X takes a value in the range (x, x + dx) is denoted f(x) dx. The cumulative probability function \(P(X \le x)\) is denoted F(x). The mean or expected value of X is denoted E[X] and its variance Var[X]. The function g(s) is said to be a generating function for X if
\[ g(s) = \int_{\text{all }x} e^{-sx} f(x)\,dx \]
With this definition:
\[ E[X] = \mu = -g'(0) \]
\[ \mathrm{Var}[X] = \sigma^2 = E[X^2] - \mu^2 = g''(0) - g'(0)^2 \]
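The moment relations \(E[X] = -g'(0)\) and \(\mathrm{Var}[X] = g''(0) - g'(0)^2\) can be checked by finite differences on a known generating function. A sketch (function names invented), using the exponential distribution's \(g(s) = \lambda/(\lambda+s)\), for which \(E[X] = 1/\lambda\) and \(\mathrm{Var}[X] = 1/\lambda^2\):

```python
def mean_from_gf(g, h=1e-6):
    """E[X] = -g'(0), estimated by a central difference."""
    return -(g(h) - g(-h)) / (2 * h)

def var_from_gf(g, h=1e-4):
    """Var[X] = g''(0) - g'(0)^2, by central differences."""
    g1 = (g(h) - g(-h)) / (2 * h)
    g2 = (g(h) - 2 * g(0.0) + g(-h)) / (h * h)
    return g2 - g1 * g1

lam = 2.0
g = lambda s: lam / (lam + s)        # exponential distribution, rate lam
mean, var = mean_from_gf(g), var_from_gf(g)   # expect 0.5 and 0.25
```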
  Distribution          Params                  f(x)                                                                                 g(s)                                        E[X]                    Var[X]
  Uniform               \(a < b\)               \(\dfrac{1}{b-a}\),  \(a \le x \le b\);  0 otherwise                                 \(\dfrac{e^{-as} - e^{-bs}}{s(b-a)}\)        \(\dfrac{a+b}{2}\)      \(\dfrac{(b-a)^2}{12}\)
  Exponential           \(\lambda > 0\)         \(\lambda e^{-\lambda x}\),  \(x \ge 0\);  0 otherwise                               \(\dfrac{\lambda}{\lambda+s}\)               \(\dfrac{1}{\lambda}\)  \(\dfrac{1}{\lambda^2}\)
  Normal or Gaussian    \(\sigma > 0\)          \(\dfrac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\dfrac{1}{2}\left(\dfrac{x-\mu}{\sigma}\right)^2\right\}\),  \(-\infty < x < \infty\)   \(\exp\left(-s\mu + \tfrac{1}{2}s^2\sigma^2\right)\)   \(\mu\)   \(\sigma^2\)
  Standard Normal                               \(\dfrac{1}{\sqrt{2\pi}}\exp\left\{-\tfrac{1}{2}x^2\right\}\),  \(-\infty < x < \infty\)   \(\exp\left(\tfrac{1}{2}s^2\right)\)   0   1
  Erlang-k              \(k > 0,\ \mu > 0\)     \(\dfrac{\mu k(\mu k x)^{k-1} e^{-\mu k x}}{(k-1)!}\),  \(x \ge 0\);  0 otherwise    \(\left(\dfrac{k\mu}{k\mu+s}\right)^k\)      \(\dfrac{1}{\mu}\)      \(\dfrac{1}{k\mu^2}\)

Standard Normal Distribution
If X has a normal distribution with mean \(\mu\) and standard deviation \(\sigma\) (denoted \(X \sim N(\mu,\sigma)\)), then \(Y = \dfrac{X-\mu}{\sigma}\) has a normal distribution with mean 0 and standard deviation 1 (i.e. \(Y \sim N(0,1)\)).

N(0,1) is referred to as the standard normal distribution.

Tables of the cumulative probability function
\[ F(z) = \frac{1}{\sqrt{2\pi}}\int_{x=-\infty}^{z} \exp\left\{-\tfrac{1}{2}x^2\right\}\,dx \]
for the standard normal distribution, which is usually denoted \(\Phi(z)\), appear opposite.
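In place of printed tables, \(\Phi(z)\) can be computed from the error function via the standard identity \(\Phi(z) = \tfrac{1}{2}\left(1 + \mathrm{erf}(z/\sqrt{2})\right)\). A minimal sketch (function name invented):

```python
import math

def phi(z):
    """Standard normal cumulative probability Phi(z), computed from
    the error function: Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p = phi(1.96)   # familiar tabulated value, close to 0.975
```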