
Recursive Least Squares

Recursive least squares arises most frequently when parameters are identified from
recurring (in time) linear algebraic equations. Let a single linear algebraic equation at
time t be written as

(1)    a_1(t) x_1 + a_2(t) x_2 + ... + a_n(t) x_n = b(t)

where a_j(t) (j = 1, 2, ..., n) and b(t) are known measurement data and x_j (j = 1, 2, ..., n)
are parameters that need to be determined. Let's first evaluate Eq. (1) at the times t_1, t_2,
..., t_m, in which m is greater than n. In matrix-vector form, we get the following set of
over-determined linear algebraic equations

(2)    A_0 x_0 = b_0

in which the entries of A_0 are given by a_ij = a_j(t_i) (i = 1, 2, ..., m; j = 1, 2, ..., n), the
entries of x_0 are given by x_j (j = 1, 2, ..., n), and the entries of b_0 are given by
b_i = b(t_i) (i = 1, 2, ..., m). Assume that the matrix A_0 is full rank (although this
requirement will later be relaxed). The least squares solution to Eq. (2) is

(3)    x_0 = (A_0^T A_0)^{-1} A_0^T b_0
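
As a concrete illustration, the batch solution of Eq. (3) can be computed numerically. The following sketch uses NumPy with made-up data (the particular matrix and parameter values are assumptions for illustration, not from the text):

```python
import numpy as np

# Hypothetical data: m = 5 measurements of n = 2 unknown parameters.
# Row i of A0 holds a_1(t_i), a_2(t_i); b0 holds the measurements b(t_i).
A0 = np.array([[1.0, 0.0],
               [1.0, 1.0],
               [1.0, 2.0],
               [1.0, 3.0],
               [1.0, 4.0]])
x_true = np.array([2.0, -1.0])
b0 = A0 @ x_true  # noise-free right-hand side, so x0 should recover x_true

# Eq. (3): x0 = (A0^T A0)^{-1} A0^T b0, computed by solving the
# normal equations rather than forming the inverse explicitly
x0 = np.linalg.solve(A0.T @ A0, A0.T @ b0)
print(x0)  # recovers [2., -1.] for this noise-free data
```

Solving the normal equations with `np.linalg.solve` is numerically preferable to forming `(A0.T @ A0)^{-1}` explicitly, though both implement Eq. (3).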

Let us now consider what happens when new data arrives. At time t_{m+1} the following new
equation arrives:

(4)    a_1(t_{m+1}) x_1 + a_2(t_{m+1}) x_2 + ... + a_n(t_{m+1}) x_n = b(t_{m+1})

which in matrix-vector form is written

(5)    a^T x = b

in which the entries of a are given by a_j = a_j(t_{m+1}) (j = 1, 2, ..., n) and b = b(t_{m+1}).
This additional equation can be added at the bottom of the original set of equations, Eq. (2),
to obtain the over-determined set of equations

(6)    A x = b

in which

    A = \begin{bmatrix} A_0 \\ a^T \end{bmatrix} ,    b = \begin{bmatrix} b_0 \\ b \end{bmatrix}


The least squares solution to Eq. (6) is

(7)    x = (A^T A)^{-1} A^T b
The addition of Eq. (5) to the original set of equations, Eq. (2), required the solution of
the equations to be recomputed from scratch. In other words, the original solution, Eq. (3),
was not utilized in obtaining the new solution, Eq. (7). As it turns out, this is inefficient:
the inverse is recalculated, requiring a large number of unnecessary operations. A different
way of calculating the new solution utilizes the original solution. This method of solution
is called recursive least squares. It will be shown later that the recursive least squares
solution is robust as well as efficient.

Let us now develop the recursive least squares solution. Toward this end, several
intermediate steps are necessary. First, notice from Eqs. (6) that

(8a,b)    A^T A = A_0^T A_0 + a a^T ,    A^T b = A_0^T b_0 + a b
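
These rank-one update identities are easy to confirm numerically by stacking the new equation under the original data, as in Eq. (6). The data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
A0 = rng.standard_normal((5, 3))  # original m x n data matrix
b0 = rng.standard_normal(5)       # original right-hand side b_0
a = rng.standard_normal(3)        # new equation's coefficients a
b = 0.7                           # new measurement b(t_{m+1})

# Stack the new row a^T under A0 and b under b0, as in Eq. (6)
A = np.vstack([A0, a])
bvec = np.append(b0, b)

# Eq. (8a): A^T A = A0^T A0 + a a^T
assert np.allclose(A.T @ A, A0.T @ A0 + np.outer(a, a))
# Eq. (8b): A^T b = A0^T b0 + a b
assert np.allclose(A.T @ bvec, A0.T @ b0 + a * b)
```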


Next, notice that the inverse of a matrix constructed from adding the identity matrix I to
any rank-one matrix d e^T is itself equal to the identity matrix plus a multiple of the rank-one
matrix, written

(9)    [I + d e^T]^{-1} = I + c d e^T


for some value of c. Equation (9) is verified by simply checking its validity. Perform the
following calculation:

(10)    [I + d e^T][I + c d e^T] = I + c d e^T + d e^T + c d e^T d e^T
                                 = I + c d e^T + d e^T + c d (e^T d) e^T
                                 = I + [c (1 + e^T d) + 1] d e^T

Equation (9) is the inverse of the constructed matrix if Eq. (10) yields the identity matrix,
which is the case if the term in brackets in Eq. (10) is zero. Thus, we let

(11)    c = -1 / (1 + e^T d)


Equation (9) is the cornerstone of recursive least squares. It shows us that the inverse of
the identity matrix plus any rank-one matrix can be determined without having to take the
inverse explicitly. However, we need to extend this result so that the inverse can be
constructed from any non-singular matrix, not just the identity matrix. Toward that end,
multiply Eq. (9) by the non-singular matrix B^{-1} to get

(12)    [[I + d e^T] B]^{-1} = B^{-1} [I + d e^T]^{-1} = B^{-1} [I + c d e^T] = B^{-1} + c B^{-1} d e^T


Letting f^T = e^T B yields

(13)    [B + d f^T]^{-1} = B^{-1} + c B^{-1} d f^T B^{-1} ,    c = -1 / (1 + f^T B^{-1} d)
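
This identity is commonly known as the Sherman-Morrison formula, and it can be checked numerically. The matrices below are illustrative assumptions (B is shifted by a multiple of the identity simply to keep it non-singular):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)  # a non-singular matrix
d = rng.standard_normal(4)
f = rng.standard_normal(4)

Binv = np.linalg.inv(B)
c = -1.0 / (1.0 + f @ Binv @ d)  # scalar c from Eq. (13)

# Eq. (13): [B + d f^T]^{-1} = B^{-1} + c B^{-1} d f^T B^{-1}
lhs = np.linalg.inv(B + np.outer(d, f))
rhs = Binv + c * np.outer(Binv @ d, f @ Binv)
assert np.allclose(lhs, rhs)
```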



Equations (8) and (13) are used to determine the recursive least squares solution. From
Eqs. (7) and (8), and letting B = A_0^T A_0 and d = f = a in Eq. (13),

(14)    x = (A^T A)^{-1} A^T b
          = [(A_0^T A_0)^{-1} + c (A_0^T A_0)^{-1} a a^T (A_0^T A_0)^{-1}] (A_0^T b_0 + a b)
          = x_0 + (A_0^T A_0)^{-1} a b + c (A_0^T A_0)^{-1} a (a^T x_0) + c (A_0^T A_0)^{-1} a (a^T (A_0^T A_0)^{-1} a) b
          = x_0 + (A_0^T A_0)^{-1} a [(1 + c a^T (A_0^T A_0)^{-1} a) b + c a^T x_0]
          = x_0 + (1 / (1 + a^T (A_0^T A_0)^{-1} a)) (A_0^T A_0)^{-1} a (b - a^T x_0)
          = x_0 + k (b - a^T x_0)

where the second-to-last line follows by substituting c = -1 / (1 + a^T (A_0^T A_0)^{-1} a)
from Eq. (13),










in which

(15)    k = (1 / (1 + a^T (A_0^T A_0)^{-1} a)) (A_0^T A_0)^{-1} a


Letting P_0 = (A_0^T A_0)^{-1} and P = (A^T A)^{-1}, and considering Eq. (13), the recursive least
squares solution is now written as

(16a-c)    x = x_0 + k (b - a^T x_0) ,    k = P_0 a / (1 + a^T P_0 a) ,    P = [I - k a^T] P_0


Notice that the update of x in Eq. (16a) is a vector k multiplied by the error b - a^T x_0
associated with the new equation, evaluated using the original solution x_0. The matrix P_0
is updated in Eq. (16c) for the next step in the iteration.
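
A single recursive step of Eqs. (16a-c) can be sketched as follows, and checked against the batch solution of Eq. (7). The function name and the data values are assumptions for illustration:

```python
import numpy as np

def rls_update(x0, P0, a, b):
    """One recursive least squares step, Eqs. (16a-c)."""
    k = P0 @ a / (1.0 + a @ P0 @ a)              # Eq. (16b): gain vector
    x = x0 + k * (b - a @ x0)                    # Eq. (16a): correct x by the new error
    P = (np.eye(len(x0)) - np.outer(k, a)) @ P0  # Eq. (16c): update P for the next step
    return x, P

# Check the recursive step against the batch solution on hypothetical data
rng = np.random.default_rng(2)
A0 = rng.standard_normal((6, 3))
b0 = rng.standard_normal(6)
a = rng.standard_normal(3)   # new equation a^T x = b
b = 0.3

P0 = np.linalg.inv(A0.T @ A0)
x0 = P0 @ A0.T @ b0          # batch solution of Eq. (3)
x, P = rls_update(x0, P0, a, b)

A = np.vstack([A0, a])
bvec = np.append(b0, b)
x_batch = np.linalg.solve(A.T @ A, A.T @ bvec)  # batch solution of Eq. (7)
assert np.allclose(x, x_batch)
```

Note that the recursive step never inverts a matrix; it costs O(n^2) operations per new equation instead of the O(n^3) of recomputing Eq. (7).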

In practice, this recursive formula can be initiated by setting P_0 to a large diagonal
matrix, and by letting x_0 be your best first guess. The equations don't need to be put into
matrix-vector form, i.e., A_0 does not enter into the calculations. The requirement that A_0
be full rank can thus be relaxed. This is why the recursive least squares solution is more
robust than the standard least squares solution.
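
The practical initialization described above might be sketched as a streaming loop. The true parameter values, noise level, and stream length below are made-up assumptions:

```python
import numpy as np

# Streaming RLS: P starts as a large diagonal matrix, x as a rough
# first guess; equations arrive one at a time and A_0 is never formed.
rng = np.random.default_rng(3)
x_true = np.array([1.5, -0.5, 2.0])  # hypothetical true parameters
n = len(x_true)

x = np.zeros(n)       # best first guess
P = 1e6 * np.eye(n)   # large diagonal matrix

for t in range(200):
    a = rng.standard_normal(n)                     # coefficients a_j(t)
    b = a @ x_true + 0.01 * rng.standard_normal()  # noisy measurement b(t)
    k = P @ a / (1.0 + a @ P @ a)                  # Eq. (16b)
    x = x + k * (b - a @ x)                        # Eq. (16a)
    P = (np.eye(n) - np.outer(k, a)) @ P           # Eq. (16c)

print(x)  # close to x_true after enough equations
```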
