
Geospatial Science

RMIT

CONSTRAINED LEAST SQUARES

Least Squares is used extensively in the analysis and adjustment of survey network measurements.
In the majority of applications the measurements (say, directions, distances, height differences, etc)
are connected to the unknowns (say coordinates and heights of points) by properly posed models,
i.e., there are sufficient "fixed" points in the network to properly define the coordinate origin, the
orientation of the network and the datum for heights. In such cases it may be desirable for certain
unknowns to accord with geometric conditions, say for instance, the Reduced Levels (RL) of two
unknown points in a height network are to be held at a fixed height difference. We may call this a
constraint on the adjusted RL's. Or, in a traverse network, we may wish to hold a particular line of
the traverse to a fixed bearing. Here we may say that the adjustment has been constrained by the
fixed bearing. In these cases, the geometric conditions can be expressed as constraint equations
linking the adjusted quantities to certain values. These constraint equations can be added to the
normal equations and the combined system solved. The addition of constraint equations lends
flexibility to survey network adjustment.

In cases where the adjustment model is not properly posed, i.e., the coordinate datum is not fixed,
the network orientation is unknown or the height datum is not fixed, constraint equations can be
used to correct for these datum defects. In such cases a minimum number of constraints is
required to obtain a solution of the network problem. For example, in a 2-Dimensional survey
network of directions and distances the minimum number of constraints would be one fixed point
(two coordinates held fixed) and a fixed bearing of a line in the network, or a single fixed point and
one other coordinate in the network. Also, constraint equations can be used in "free net"
adjustments where every point in the network is regarded as floating and there are no fixed points.
In free net adjustments, the constraint equations must take a certain form that will be discussed in
subsequent notes.

These notes follow closely the techniques and notation in Observations and Least Squares by E.M.
Mikhail (Mikhail 1976).

Consider a set of n observation equations in u unknowns (n > u) in the matrix form

    v + Bx = f                                                              (1)

where
    v is the n×1 vector of residuals
    x is the u×1 vector of unknowns (or parameters)
    B is the n×u matrix of coefficients (design matrix)
    f is the n×1 vector of numeric terms (constants)
    n is the number of equations (or observations)
    u is the number of unknowns

Also, suppose that the unknowns x must satisfy certain CONSTRAINTS expressed as equations
and written in matrix form as

    Cx = g                                                                  (2)

where
    C is the c×u matrix of coefficients
    g is the c×1 vector of numeric terms (constants)
    c is the number of constraint equations
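As a concrete illustration of the form of equations (1) and (2), the sketch below (not part of the original notes; all point names and numerical values are invented) sets up B, f, W, C and g for a small hypothetical levelling network: a benchmark BM of known RL, two unknown points P1 and P2, three observed height differences, and a constraint holding the height difference between P1 and P2 at a nominated value.

```python
import numpy as np

# Hypothetical levelling network (illustrative values only).
# Unknowns x = [RL_P1, RL_P2]; benchmark BM has known RL 100.000 m.
# Observed height differences: BM->P1, P1->P2, BM->P2 (metres).
rl_bm = 100.000
obs = np.array([2.514, 1.102, 3.620])

# Observation equations in the form v + Bx = f  (equation (1)).
B = np.array([[ 1.0, 0.0],    # RL_P1 - RL_BM = 2.514
              [-1.0, 1.0],    # RL_P2 - RL_P1 = 1.102
              [ 0.0, 1.0]])   # RL_P2 - RL_BM = 3.620
f = np.array([rl_bm + obs[0], obs[1], rl_bm + obs[2]])

W = np.eye(3)                 # n x n weight matrix (equal weights assumed)

# Constraint Cx = g (equation (2)): hold RL_P2 - RL_P1 at exactly 1.100 m.
C = np.array([[-1.0, 1.0]])
g = np.array([1.100])

print("B =\n", B, "\nf =", f, "\nC =", C, "g =", g)
```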

The Least Squares condition is enforced by minimizing the function

    φ = v^T W v - 2 k^T (Cx - g) = minimum                                  (3)

where
    W is the n×n weight matrix
    k is the c×1 vector of Lagrange multipliers

To find an expression for v^T W v consider equation (1), which can be rearranged as

    v = f - Bx

and using the matrix rules for transposition

    v^T = (f - Bx)^T
        = f^T - (Bx)^T
        = f^T - x^T B^T

hence

    v^T W v = (f^T - x^T B^T) W (f - Bx)
            = (f^T - x^T B^T) (Wf - WBx)
            = f^T W f - f^T W B x - x^T B^T W f + x^T B^T W B x

and each term of this equation is a scalar quantity. Now, since (x^T B^T W f)^T = f^T W B x,
noting that W is symmetric, hence W^T = W, then

    v^T W v = f^T W f - 2 f^T W B x + x^T B^T W B x                         (4)

Making the substitutions

    N = B^T W B                                                             (5)
    t = B^T W f                                                             (6)

and noting that t^T = (B^T W f)^T = f^T W B, equation (4) becomes

    v^T W v = f^T W f - 2 t^T x + x^T N x                                   (7)
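Identity (7) is easy to confirm numerically. The short check below (my own illustration, with randomly generated B, W and f and an arbitrary x) builds N and t as in (5) and (6) and verifies that v^T W v equals f^T W f - 2 t^T x + x^T N x.

```python
import numpy as np

rng = np.random.default_rng(1)
n, u = 6, 3                                  # illustrative sizes with n > u
B = rng.normal(size=(n, u))
f = rng.normal(size=n)
W = np.diag(rng.uniform(0.5, 2.0, size=n))   # positive-definite weight matrix
x = rng.normal(size=u)                       # any trial value of the unknowns

N = B.T @ W @ B                              # equation (5)
t = B.T @ W @ f                              # equation (6)

v = f - B @ x                                # residuals from equation (1)
lhs = v @ W @ v
rhs = f @ W @ f - 2 * t @ x + x @ N @ x      # right-hand side of equation (7)
assert np.isclose(lhs, rhs)
```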

Substituting equation (7) into equation (3) gives

    φ = f^T W f - 2 t^T x + x^T N x - 2 k^T (Cx - g)                        (8)

Minimizing φ by equating the derivative ∂φ/∂x to zero gives

    ∂φ/∂x = -2 t^T + 2 x^T N - 2 k^T C = 0

Dividing by 2, transposing and rearranging gives

    N x - C^T k - t = 0                                                     (9)

where
    N is the u×u coefficient matrix of the least squares normal equations
    t is the u×1 vector of numeric terms

Equations (2) and (9) can be expressed as

    -N x + C^T k + t = 0
     C x + 0 k  - g = 0

or in partitioned matrix form

    [ -N   C^T ] [ x ]   [ -t ]
    [  C    0  ] [ k ] = [  g ]                                             (10)

The orders of the sub-matrices and matrices are: N is u×u, C^T is u×c, C is c×u and 0 is c×c,
so the partitioned coefficient matrix is (u+c)×(u+c); x is u×1 and k is c×1, so the vector of
unknowns is (u+c)×1; t is u×1 and g is c×1, so the vector of numeric terms is (u+c)×1.

Equation (10) can be solved directly for x and k by

    [ x ]   [ -N   C^T ]^-1 [ -t ]
    [ k ] = [  C    0  ]    [  g ]                                          (11)

Note that in the case where N is singular (usually because of "datum" problems leading to
rank(N) < u), the coefficient matrix in equation (10) will be non-singular provided the
constraint equations Cx = g have been properly chosen.
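The sketch below (not from the notes; the network and numbers are invented) illustrates the point: a closed levelling loop with no fixed point gives a singular N, yet the partitioned system (10) is solvable once a single constraint fixes the height datum. The signs follow the convention of equation (10).

```python
import numpy as np

# Hypothetical loop of height differences P1->P2, P2->P3, P3->P1 (metres).
# Unknowns x = [RL_P1, RL_P2, RL_P3]; no fixed point, so N has a datum defect.
B = np.array([[-1.0,  1.0,  0.0],
              [ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0]])
f = np.array([2.005, 0.995, -3.006])          # observed differences (invented)
W = np.eye(3)

N = B.T @ W @ B
t = B.T @ W @ f
print("rank(N) =", np.linalg.matrix_rank(N))  # 2 < u = 3, so N is singular

# Constraint Cx = g: hold RL_P1 at 100.000 m to define the height datum.
C = np.array([[1.0, 0.0, 0.0]])
g = np.array([100.000])

# Partitioned system of equation (10): [[-N, C^T], [C, 0]] [x; k] = [-t; g]
u, c = N.shape[0], C.shape[0]
A = np.block([[-N, C.T], [C, np.zeros((c, c))]])
rhs = np.concatenate([-t, g])
sol = np.linalg.solve(A, rhs)                 # direct solution, equation (11)
x, k = sol[:u], sol[u:]

print("adjusted RLs:", x)
assert np.allclose(C @ x, g)                  # constraint is satisfied
```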

An alternative solution for x can be obtained from equation (10), but only in the case where N is
non-singular (the usual case if there are no datum problems), using a reduction process given by
Cross (1992, pp. 22-23).

Consider the partitioned matrix equation P y = u given as

    [ P_11   P_12 ] [ y_1 ]   [ u_1 ]
    [ P_21   P_22 ] [ y_2 ] = [ u_2 ]                                       (12)

which can be expanded to give

    P_11 y_1 + P_12 y_2 = u_1

or

    y_1 = P_11^-1 (u_1 - P_12 y_2)                                          (13)

Eliminating y_1 by substituting (13) into (12) gives

    [ P_11   P_12 ] [ P_11^-1 (u_1 - P_12 y_2) ]   [ u_1 ]
    [ P_21   P_22 ] [           y_2            ] = [ u_2 ]

Expanding the matrix equation gives

    P_21 P_11^-1 (u_1 - P_12 y_2) + P_22 y_2 = u_2
    P_21 P_11^-1 u_1 - P_21 P_11^-1 P_12 y_2 + P_22 y_2 = u_2

and an expression for y_2 is given by

    y_2 = (P_22 - P_21 P_11^-1 P_12)^-1 (u_2 - P_21 P_11^-1 u_1)            (14)
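The reduction formulas (13) and (14) can be checked numerically. In the sketch below (illustrative only, with randomly generated blocks) the block-wise solution is compared against a direct solve of the full partitioned system.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b = 4, 2                                       # sizes of the two partitions

P11 = rng.normal(size=(a, a))
P11 = P11 @ P11.T + a * np.eye(a)                 # make P11 safely invertible
P12 = rng.normal(size=(a, b))
P21 = rng.normal(size=(b, a))
P22 = rng.normal(size=(b, b)) + b * np.eye(b)
u1 = rng.normal(size=a)
u2 = rng.normal(size=b)

P = np.block([[P11, P12], [P21, P22]])
u_vec = np.concatenate([u1, u2])

# Equation (14): y2 = (P22 - P21 P11^-1 P12)^-1 (u2 - P21 P11^-1 u1)
P11_inv = np.linalg.inv(P11)
y2 = np.linalg.solve(P22 - P21 @ P11_inv @ P12, u2 - P21 @ P11_inv @ u1)
# Equation (13): y1 = P11^-1 (u1 - P12 y2)
y1 = P11_inv @ (u1 - P12 @ y2)

assert np.allclose(np.concatenate([y1, y2]), np.linalg.solve(P, u_vec))
```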

Applying equations (13) and (14) to equation (10), with P_11 = -N, P_12 = C^T, P_21 = C,
P_22 = 0, u_1 = -t and u_2 = g, gives

    x = (-N)^-1 (-t - C^T k)
      = N^-1 t + N^-1 C^T k                                                 (15)

    k = (0 - C (-N)^-1 C^T)^-1 (g - C (-N)^-1 (-t))
      = (C N^-1 C^T)^-1 (g - C N^-1 t)                                      (16)

Note that these solutions for x and k are only possible when N is non-singular, i.e., |N| ≠ 0 and
N^-1 exists. Substituting equation (16) into equation (15) gives a solution for x as

    x = N^-1 t + N^-1 C^T (C N^-1 C^T)^-1 g - N^-1 C^T (C N^-1 C^T)^-1 C N^-1 t

Making the substitutions

    M = (C N^-1 C^T)^-1   and   x' = N^-1 t                                 (17)

gives

    x = x' + N^-1 C^T M (g - C x')                                          (18)
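Applied to the small hypothetical levelling network used earlier (where N is non-singular because the benchmark defines the datum), equations (17) and (18) give the same result as solving the partitioned system (10) directly, as the sketch below illustrates.

```python
import numpy as np

# Same illustrative network as before: benchmark BM (RL 100.000 m),
# unknowns x = [RL_P1, RL_P2], constraint RL_P2 - RL_P1 = 1.100 m.
B = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, 1.0]])
f = np.array([102.514, 1.102, 103.620])
W = np.eye(3)
C = np.array([[-1.0, 1.0]])
g = np.array([1.100])

N = B.T @ W @ B                      # non-singular: the datum is defined by BM
t = B.T @ W @ f

# Equations (17) and (18)
N_inv = np.linalg.inv(N)
x_free = N_inv @ t                               # x' = N^-1 t (unconstrained)
M = np.linalg.inv(C @ N_inv @ C.T)               # M = (C N^-1 C^T)^-1
x = x_free + N_inv @ C.T @ M @ (g - C @ x_free)  # equation (18)

# Compare with the direct solution of the partitioned system (10)/(11).
u, c = N.shape[0], C.shape[0]
A = np.block([[-N, C.T], [C, np.zeros((c, c))]])
sol = np.linalg.solve(A, np.concatenate([-t, g]))
assert np.allclose(x, sol[:u])
print("constrained RLs:", x)
```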

COVARIANCE PROPAGATION TO FIND Q_xx

The solution for x and k, given by equation (11), may be written as

    z = D w                                                                 (19)

and using the general law of propagation of variances (linear functions)

    Q_zz = D Q_ww D^T                                                       (20)

where

    z = [ x ]   and   Q_zz = [ Q_xx   Q_xk ]
        [ k ]                [ Q_kx   Q_kk ]                                (21)

    D = [ -N   C^T ]^-1 = [ D_11   D_12 ]
        [  C    0  ]      [ D_21   D_22 ]                                   (22)

    w = [ -t ]   and   Q_ww = [ Q_tt   Q_tg ] = [ N   0 ]
        [  g ]                [ Q_gt   Q_gg ]   [ 0   0 ]                   (23)

Note that the cofactor matrices Q_gg = 0, Q_tg = 0 and Q_gt = 0 since g is a vector of constants.
The cofactor matrix Q_tt = N can be obtained from propagation of variances understanding that
t = B^T W f and Q_tt = (B^T W) Q_ff (B^T W)^T = N since Q_ff = Q = W^-1.
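As a quick numerical confirmation of Q_tt = N (my own check, with invented numbers): propagating the cofactor matrix Q_ff = W^-1 through t = B^T W f reproduces N.

```python
import numpy as np

rng = np.random.default_rng(3)
n, u = 5, 2                                  # illustrative sizes
B = rng.normal(size=(n, u))
W = np.diag(rng.uniform(0.5, 2.0, size=n))   # weight matrix

N = B.T @ W @ B
Q_ff = np.linalg.inv(W)                      # cofactor matrix of f

# Q_tt = (B^T W) Q_ff (B^T W)^T, which should equal N.
Q_tt = (B.T @ W) @ Q_ff @ (B.T @ W).T
assert np.allclose(Q_tt, N)
```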

Multiplying the matrices above gives

    Q_zz = [ Q_xx   Q_xk ] = [ D_11   D_12 ] [ N   0 ] [ D_11^T   D_21^T ]
           [ Q_kx   Q_kk ]   [ D_21   D_22 ] [ 0   0 ] [ D_12^T   D_22^T ]

         = [ D_11 N D_11^T   D_11 N D_21^T ]
           [ D_21 N D_11^T   D_21 N D_21^T ]

from which we obtain

    Q_xx = D_11 N D_11^T                                                    (24)

But, from above,

    D = [ -N   C^T ]^-1 = [ D_11   D_12 ]
        [  C    0  ]      [ D_21   D_22 ]

and we may write

    [ -N   C^T ] [ D_11   D_12 ]   [ I   0 ]
    [  C    0  ] [ D_21   D_22 ] = [ 0   I ]

giving

    -N D_11 + C^T D_21 = I
     N D_11 = -I + C^T D_21
     C D_11 = 0
     D_11 C^T = 0          (since D, and hence D_11, is symmetric)

Substituting these results into equation (24), and noting that D_11^T = D_11, gives

    Q_xx = D_11 N D_11 = D_11 (-I + C^T D_21)
         = -D_11 + D_11 C^T D_21

but from above D_11 C^T = 0, hence

    Q_xx = -D_11                                                            (25)

and the matrix D_11 of order u×u is the upper-left sub-matrix of D in equation (22).
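The sketch below (my own check, using the same hypothetical levelling network) verifies equation (25) numerically: the negative of the upper-left block of D agrees with the closed form N^-1 - N^-1 C^T M C N^-1, which also follows from applying propagation of variances to equation (18) with g treated as a vector of constants.

```python
import numpy as np

# Same illustrative network: N non-singular, one constraint.
B = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, 1.0]])
W = np.eye(3)
C = np.array([[-1.0, 1.0]])

N = B.T @ W @ B
u, c = N.shape[0], C.shape[0]

# D is the inverse of the partitioned coefficient matrix of equation (10).
A = np.block([[-N, C.T], [C, np.zeros((c, c))]])
D = np.linalg.inv(A)
Q_xx = -D[:u, :u]                                 # equation (25)

# Equivalent closed form obtained by propagating cofactors through (18).
N_inv = np.linalg.inv(N)
M = np.linalg.inv(C @ N_inv @ C.T)
Q_xx_check = N_inv - N_inv @ C.T @ M @ C @ N_inv

assert np.allclose(Q_xx, Q_xx_check)
print(Q_xx)
```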

REFERENCES
Cross, P.A., 1992, Advanced Least Squares Applied to Position Fixing, Working Paper No. 6, Department of Land Information, University of East London.
Mikhail, E.M., 1976, Observations and Least Squares, IEP-A Dun-Donnelley Publisher, New York.

2006, R.E. Deakin
