If $F$ is constrained by $\|F\| \le 1$, the set of all possible end-effector
accelerations forms an ellipsoid. This ellipsoid is called the generalized
inertia ellipsoid (Asada 1983). Note that $A^{-1}$ is symmetric and positive
definite, and $JA^{-1}J^T$ is also symmetric and positive definite if rank $J =
6$. Hence, Eq. 2.110 can be transformed to

$F = (JA^{-1}J^T)^{-1}\ddot{x}$  (if $n > 6$)
$F = (J^T)^{-1}AJ^{-1}\ddot{x}$  (if $n = 6$)  (2.111)
$(JA^{-1}J^T)^{-1}$ can be interpreted as the inertia matrix that we feel when
we accelerate the end-effector by applying force directly to the end-effector.
When $\ddot{x}$ takes all the values such that $\|\ddot{x}\| \le 1$, the necessary end-effector
force takes all the values inside the ellipsoid determined by the SVD
of $(JA^{-1}J^T)^{-1}$; that is,

$(JA^{-1}J^T)^{-1} = U\Sigma U^T$  (2.112)

The direction of the largest (smallest) inertia is that of the first (last)
column vector of $U$, and the corresponding inertia is the largest (smallest)
singular value.
The coefficient $JA^{-1}J^T$ of Eq. 2.110 is referred to as the mechanical
impedance matrix. Note that the mechanical impedance matrix is defined
even if rank $J < 6$, although the inertia matrix is not. If rank $J = 6$, the
mechanical impedance matrix and the inertia matrix are inverses
of each other.
2.4 Generalized Inverse and Pseudoinverse
2.4.1 Definitions
For $A \in \mathbf{R}^{m \times n}$ and $X \in \mathbf{R}^{n \times m}$, the following equations are used to define a
generalized inverse, a reflexive generalized inverse, and a pseudoinverse of $A$
(Boullion and Odell 1971):

$AXA = A$  (2.113)
$XAX = X$  (2.114)
$(AX)^T = AX$  (2.115)
$(XA)^T = XA$  (2.116)
Equations 2.113 through 2.116 are called the Penrose conditions (Penrose
1955).
Definition 2.2 (Generalized Inverse)
A generalized inverse of a matrix $A \in \mathbf{R}^{m \times n}$ is a matrix $X = A^- \in \mathbf{R}^{n \times m}$ satisfying Eq. 2.113.
Definition 2.3 (Reflexive Generalized Inverse)
A reflexive generalized inverse of a matrix $A \in \mathbf{R}^{m \times n}$ is a matrix $X = A_r^- \in \mathbf{R}^{n \times m}$ satisfying Eqs. 2.113 and 2.114.
Definition 2.4 (Pseudoinverse)
A pseudoinverse of a matrix $A \in \mathbf{R}^{m \times n}$ is a matrix $X = A^\# \in \mathbf{R}^{n \times m}$ satisfying Eqs. 2.113 through 2.116.
A pseudoinverse is sometimes called the Moore-Penrose inverse after the
pioneering works by Moore (1920, 1935) and Penrose (1955).
Example 2.17
Let $A \in \mathbf{R}^{2 \times 3}$ and $P$, $Q$, and $R \in \mathbf{R}^{3 \times 2}$ be as follows:

$A = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix}$, $P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \end{pmatrix}$, $Q = \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}$, $R = \frac{1}{6}\begin{pmatrix} 1 & -1 \\ -1 & 1 \\ 1 & -1 \end{pmatrix}$  (2.117)

Now, we show that $P$, $Q$, and $R$ are a generalized inverse, a reflexive
generalized inverse, and a pseudoinverse of $A$, respectively.
$APA = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix} = A$
$PAP = \begin{pmatrix} 0 & -1 \\ 0 & 1 \\ 0 & 1 \end{pmatrix} \ne P$
$(AP)^T = \begin{pmatrix} 0 & 0 \\ -1 & 1 \end{pmatrix} \ne AP$
$(PA)^T = \begin{pmatrix} 1 & -1 & -1 \\ -1 & 1 & 1 \\ 1 & -1 & -1 \end{pmatrix} \ne PA$  (2.118)

Equation 2.118 shows that $P$ satisfies Eq. 2.113 only. Hence, $P$ is a
generalized inverse of $A$.
$AQA = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix} = A$
$QAQ = \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} = Q$
$(AQ)^T = \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix} \ne AQ$
$(QA)^T = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \ne QA$  (2.119)

Equation 2.119 indicates that $Q$ satisfies Eqs. 2.113 and 2.114, but does
not satisfy Eqs. 2.115 and 2.116. This result implies that $Q$ is a reflexive
generalized inverse.
$ARA = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix} = A$
$RAR = \frac{1}{6}\begin{pmatrix} 1 & -1 \\ -1 & 1 \\ 1 & -1 \end{pmatrix} = R$
$(AR)^T = \frac{1}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} = AR$
$(RA)^T = \frac{1}{3}\begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \\ 1 & -1 & 1 \end{pmatrix} = RA$  (2.120)

$R$ satisfies all four Penrose conditions. Therefore, we conclude that $R$ is
a pseudoinverse of $A$.
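The three inverses of Example 2.17 can also be checked mechanically. The following numpy sketch tests the four Penrose conditions for each candidate; it assumes the matrices as reconstructed above and uses numpy's pinv only as an independent reference.

```python
import numpy as np

A = np.array([[1., -1., 1.], [-1., 1., -1.]])
P = np.array([[1., 0.], [0., 1.], [-1., 0.]])
Q = np.array([[1., 0.], [0., 0.], [0., 0.]])
R = np.array([[1., -1.], [-1., 1.], [1., -1.]]) / 6.0

def penrose(A, X):
    """Report which of the four Penrose conditions X satisfies for A."""
    return (np.allclose(A @ X @ A, A),        # Eq. 2.113
            np.allclose(X @ A @ X, X),        # Eq. 2.114
            np.allclose((A @ X).T, A @ X),    # Eq. 2.115
            np.allclose((X @ A).T, X @ A))    # Eq. 2.116

print(penrose(A, P))  # (True, False, False, False): generalized inverse
print(penrose(A, Q))  # (True, True, False, False):  reflexive generalized inverse
print(penrose(A, R))  # (True, True, True, True):    pseudoinverse
print(np.allclose(R, np.linalg.pinv(A)))  # True: R is the Moore-Penrose inverse
```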
2.4.2 Properties
Generalized Inverse.†2.1
(1) For a linear equation

$y = Ax$  (2.121)

where $A \in \mathbf{R}^{m \times n}$, $x \in \mathbf{R}^n$, and $y \in \mathbf{R}^m$, a necessary and sufficient
condition for the existence of a solution $x$ is

rank $[A \;\; y]$ = rank $A$  (2.122)

If Eq. 2.122 is satisfied for Eq. 2.121, then

$x = A^-y$  (2.123)

is a solution of Eq. 2.121.

†2.1 Rao and Mitra 1971; Kodama and Suda 1978.
Example 2.18
We now discuss the solution of Eq. 2.121 with the $A$ matrix given in
Example 2.17, for two different $y$ vectors; namely,

$y_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$, $y_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$  (2.124)

For $y_1$ and $y_2$, Eq. 2.122 becomes

rank $[A \;\; y_1]$ = rank $\begin{pmatrix} 1 & -1 & 1 & 1 \\ -1 & 1 & -1 & -1 \end{pmatrix} = 1$ = rank $A$
rank $[A \;\; y_2]$ = rank $\begin{pmatrix} 1 & -1 & 1 & 1 \\ -1 & 1 & -1 & 0 \end{pmatrix} = 2 \ne$ rank $A = 1$  (2.125)

We compute $x_1 = A^-y_1$ and $x_2 = A^-y_2$ using $A^- = P$ as follows:

$x_1 = Py_1 = \begin{pmatrix} 1 \\ -1 \\ -1 \end{pmatrix}$, $x_2 = Py_2 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}$  (2.126)
We verify by the following equations that $x_1$ is an exact solution of Eq.
2.121 for $y = y_1$, but that $x_2$ is not an exact solution for $y = y_2$:

$Ax_1 = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix}\begin{pmatrix} 1 \\ -1 \\ -1 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \end{pmatrix} = y_1$
$Ax_2 = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \ne y_2$  (2.127)
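The existence test of Eq. 2.122 is straightforward to implement. A minimal sketch, assuming numpy's matrix_rank as the rank computation:

```python
import numpy as np

A  = np.array([[1., -1., 1.], [-1., 1., -1.]])
y1 = np.array([1., -1.])
y2 = np.array([1., 0.])

def has_exact_solution(A, y):
    # Eq. 2.122: a solution of Ax = y exists iff rank [A y] = rank A
    Ay = np.column_stack([A, y])
    return np.linalg.matrix_rank(Ay) == np.linalg.matrix_rank(A)

print(has_exact_solution(A, y1))  # True:  x1 = A^- y1 solves the equation
print(has_exact_solution(A, y2))  # False: only approximate solutions exist
```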
(2) For an arbitrary $A \in \mathbf{R}^{m \times n}$, there exists at least one generalized inverse
$A^-$, and rank $A^- \ge$ rank $A$. $A^-$ coincides with a reflexive generalized
inverse if and only if rank $A^-$ = rank $A$.
Example 2.19
Let's compare the ranks of $A$, $P = A^-$, and $Q = A_r^-$ used in Example 2.17.

rank $A$ = rank $\begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix} = 1$  (2.128)

rank $A^-$ = rank $\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \end{pmatrix} = 2$, rank $A_r^-$ = rank $\begin{pmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} = 1$  (2.129)

Therefore,

rank $A^- >$ rank $A$, $\quad A^- \ne A_r^-$  (2.130)

Moreover, rank $A_r^-$ = rank $A$.
(3) Generally $A^-$ and $A_r^-$ are not unique. If $A$ is square and nonsingular,
then the generalized inverse $A^-$ and the reflexive generalized inverse $A_r^-$
are unique, and $A^- = A_r^- = A^{-1}$.
(4) $AA^-$ and $A^-A$ are idempotent.†2.2
Example 2.20
Now, we use $A$ and $P = A^-$ of Example 2.17. $AA^-$ and $A^-A$ become

$AA^- = AP = \begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix}$
$A^-A = PA = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \\ -1 & 1 & -1 \end{pmatrix}$  (2.131)

Therefore,

$(AA^-)^2 = AA^-AA^- = \begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix} = AA^-$  (2.132)

$(A^-A)^2 = A^-AA^-A = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \\ -1 & 1 & -1 \end{pmatrix}\begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \\ -1 & 1 & -1 \end{pmatrix} = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \\ -1 & 1 & -1 \end{pmatrix} = A^-A$  (2.133)

We can readily prove statement 4 rigorously by substituting $AA^-A = A$ into Eqs. 2.132 and 2.133.
(5) Using an arbitrary matrix $U \in \mathbf{R}^{n \times m}$ and a generalized inverse $A^-$, all
the generalized inverses of $A$ can be represented by the following $X$:

$X = A^- + U - A^-AUAA^-$  (2.134)

†2.2 A square matrix $M$ is called idempotent if $M^2 = M$.
This result can be readily shown by

$AXA = A(A^- + U - A^-AUAA^-)A = AA^-A + AUA - (AA^-A)U(AA^-A) = A + AUA - AUA = A$  (2.135)

where $AA^-A = A$ was used.
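This parametrization is easy to exercise numerically. The sketch below starts from the pseudoinverse as the known generalized inverse, draws a few arbitrary U matrices, and confirms that every resulting X satisfies Eq. 2.113; the random U values are illustrative, not those of Example 2.21 below.

```python
import numpy as np

rng = np.random.default_rng(0)
A  = np.array([[1., -1., 1.], [-1., 1., -1.]])
Am = np.linalg.pinv(A)   # any generalized inverse will do; the pseudoinverse is one

for _ in range(5):
    U = rng.integers(-3, 4, size=(3, 2)).astype(float)  # arbitrary U (Eq. 2.134)
    X = Am + U - Am @ A @ U @ A @ Am
    assert np.allclose(A @ X @ A, A)   # Eq. 2.113 holds for every choice of U
print("every generated X is a generalized inverse of A")
```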
Example 2.21
Once again, we use $A$ and $P = A^-$ as in Example 2.17. $U \in \mathbf{R}^{3 \times 2}$ is
chosen as follows:

$U = \begin{pmatrix} -1 & 0 \\ 3 & 1 \\ 4 & 1 \end{pmatrix}$  (2.136)

The $X$ matrix of Eq. 2.134 is now computed using Eqs. 2.117, 2.131, and
2.136 as follows:

$X = A^- + U - A^-AUAA^- = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \end{pmatrix} + \begin{pmatrix} -1 & 0 \\ 3 & 1 \\ 4 & 1 \end{pmatrix} - \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 3 & 2 \\ 3 & 1 \end{pmatrix}$  (2.137)
Our claim is that $X$ is another generalized inverse of $A$. We verify this
claim by checking the Penrose conditions as follows:

$AXA = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix}\begin{pmatrix} 0 & 0 \\ 3 & 2 \\ 3 & 1 \end{pmatrix}\begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix} = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix} = A$
$XAX = \begin{pmatrix} 0 & 0 \\ 0 & -1 \\ 0 & -2 \end{pmatrix} \ne X$
$(AX)^T = \begin{pmatrix} 0 & 0 \\ -1 & 1 \end{pmatrix} \ne AX$
$(XA)^T = \begin{pmatrix} 0 & 1 & 2 \\ 0 & -1 & -2 \\ 0 & 1 & 2 \end{pmatrix} \ne XA$  (2.138)

Since only Eq. 2.113 is satisfied among the four Penrose conditions of Eqs.
2.113 through 2.116, $X$ is a generalized inverse, and is neither a reflexive
generalized inverse nor a pseudoinverse. Note that, since a pseudoinverse
is also a generalized inverse and is unique, once we get it, we can compute
every generalized inverse using Eq. 2.134.
Pseudoinverse.†2.3
(1) For a given $A \in \mathbf{R}^{m \times n}$, the pseudoinverse $A^\# \in \mathbf{R}^{n \times m}$ is unique, whereas
$A^-$ and $A_r^-$ are not necessarily unique. Let the sets of $A^-$, $A_r^-$, and
$A^\#$ be $S^-$, $S_r^-$, and $S^\#$, respectively; then, the following inclusion holds:

$S^\# \subset S_r^- \subset S^-$  (2.139)
(2) $(A^\#)^\# = A$.
(3) $(A^T)^\# = (A^\#)^T$.
(4) $A^\# = (A^TA)^\#A^T = A^T(AA^T)^\#$.
For $A \in \mathbf{R}^{m \times n}$, if $m < n$ and rank $A = m$, then $AA^T$ is nonsingular and

$A^\# = A^T(AA^T)^{-1}$  (2.140)

If $m > n$ and rank $A = n$, then $A^TA$ is nonsingular and

$A^\# = (A^TA)^{-1}A^T$  (2.141)

If $m = n$ and rank $A = m$, then

$A^\# = A^{-1}$  (2.142)

Example 2.22
$A \in \mathbf{R}^{2 \times 3}$ is defined as follows:

$A = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & 1 \end{pmatrix}$  (2.143)
Since $A$ is full rank and $m = 2 < 3 = n$, we can use Eq. 2.140 to compute
$A^\#$; namely,

$A^\# = A^T(AA^T)^{-1} = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ -1 & 1 \\ 1 & 2 \end{pmatrix}$  (2.144)

†2.3 Rao and Mitra 1971; Boullion and Odell 1971; Kodama and Suda 1978.
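The closed forms of Eqs. 2.140 through 2.142 translate directly into code. A sketch under the stated full-rank assumptions, checked against numpy's pinv for the A of Example 2.22 as reconstructed above:

```python
import numpy as np

def pinv_full_rank(A):
    """Pseudoinverse by Eqs. 2.140-2.142; assumes A has full rank."""
    m, n = A.shape
    if m < n:
        return A.T @ np.linalg.inv(A @ A.T)     # Eq. 2.140
    if m > n:
        return np.linalg.inv(A.T @ A) @ A.T     # Eq. 2.141
    return np.linalg.inv(A)                     # Eq. 2.142

A = np.array([[1., -1., 0.], [0., 1., 1.]])     # m = 2 < 3 = n, rank 2
print(pinv_full_rank(A) * 3)                    # [[2, 1], [-1, 1], [1, 2]]
print(np.allclose(pinv_full_rank(A), np.linalg.pinv(A)))  # True
```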
Equation 2.141 does not work for this $A$ because $m < n$ and

$\det(A^TA) = \det\left\{\begin{pmatrix} 1 & 0 \\ -1 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & 1 \end{pmatrix}\right\} = \det\begin{pmatrix} 1 & -1 & 0 \\ -1 & 2 & 1 \\ 0 & 1 & 1 \end{pmatrix} = 0$  (2.145)
and, therefore, $(A^TA)^{-1}$ is not defined. For any two matrices, $M_1$ and
$M_2$, that can define a product $M_1M_2$, the following equation holds:

rank $(M_1M_2) \le \min($rank $M_1$, rank $M_2)$  (2.146)

Therefore, for $A \in \mathbf{R}^{m \times n}$, $m < n$,

rank $(A^TA) \le \min($rank $A^T$, rank $A)$ = rank $A \le \min(m, n) = m$  (2.147)
Since $A^TA \in \mathbf{R}^{n \times n}$, $A^TA$ is not full rank and $\det(A^TA) = 0$.
(5) $A^\#A$, $AA^\#$, $E - A^\#A$, and $E - AA^\#$ are all symmetric and idempotent,
where $E$ represents an identity matrix of appropriate dimension.
Example 2.23
For the matrix $A$ used in Example 2.22, we can compute $A^\#A$, $AA^\#$,
$E - A^\#A$, and $E - AA^\#$ as follows:

$A^\#A = \frac{1}{3}\begin{pmatrix} 2 & -1 & 1 \\ -1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}$
$AA^\# = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$
$E - A^\#A = \frac{1}{3}\begin{pmatrix} 1 & 1 & -1 \\ 1 & 1 & -1 \\ -1 & -1 & 1 \end{pmatrix}$
$E - AA^\# = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$  (2.148)

Verify the idempotency of the four matrices.
(6) If $A \in \mathbf{R}^{n \times n}$ is symmetric and idempotent, then, for any matrix $B \in \mathbf{R}^{m \times n}$, the following equation holds (Maciejewski and Klein 1985):

$A(BA)^\# = (BA)^\#$  (2.149)
Example 2.24
We use $E - A^\#A$ as obtained in Eq. 2.148 as an example of a symmetric
and idempotent matrix. Now, our $A \in \mathbf{R}^{3 \times 3}$ and $B \in \mathbf{R}^{2 \times 3}$ in Eq. 2.149
are as follows:

$A = \frac{1}{3}\begin{pmatrix} 1 & 1 & -1 \\ 1 & 1 & -1 \\ -1 & -1 & 1 \end{pmatrix}$, $B = \begin{pmatrix} 1 & -2 & 3 \\ 1 & 0 & -1 \end{pmatrix}$  (2.150)

$(BA)^\#$ is computed as

$(BA)^\# = \left\{\frac{1}{3}\begin{pmatrix} -4 & -4 & 4 \\ 2 & 2 & -2 \end{pmatrix}\right\}^\# = \begin{pmatrix} -0.2 & 0.1 \\ -0.2 & 0.1 \\ 0.2 & -0.1 \end{pmatrix}$  (2.151)
$A(BA)^\#$ is, then, given by

$A(BA)^\# = \frac{1}{3}\begin{pmatrix} 1 & 1 & -1 \\ 1 & 1 & -1 \\ -1 & -1 & 1 \end{pmatrix}\begin{pmatrix} -0.2 & 0.1 \\ -0.2 & 0.1 \\ 0.2 & -0.1 \end{pmatrix} = \begin{pmatrix} -0.2 & 0.1 \\ -0.2 & 0.1 \\ 0.2 & -0.1 \end{pmatrix}$  (2.152)

Equations 2.151 and 2.152 indicate that $A(BA)^\# = (BA)^\#$. The same
result is obtained for any $2 \times 3$ matrix $B$. This relationship will be
used later to simplify the computation for utilizing kinematic redundancy.
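Property 6 can also be spot-checked numerically. The sketch below builds a random orthogonal projector (which is exactly a symmetric idempotent matrix) and a random B; these matrices are illustrative stand-ins, not those of Example 2.24.

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.standard_normal((3, 2))
A = V @ np.linalg.pinv(V)        # projector onto range(V): symmetric, A @ A = A
B = rng.standard_normal((2, 3))

lhs = A @ np.linalg.pinv(B @ A)
rhs = np.linalg.pinv(B @ A)
print(np.allclose(lhs, rhs))     # True: A (BA)^# = (BA)^#  (Eq. 2.149)
```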
Properties 2, 3, 4, and 6 are readily obtained by verifying Eqs. 2.113
through 2.116. Property 5 is obvious from Eqs. 2.115 and 2.116. For further
properties, see Rao and Mitra (1971), Boullion and Odell (1971), and Kodama
and Suda (1978).
2.4.3 Solving Linear Equations
The pseudoinverse has wide applications in solving various types of linear
problems. The following theorem is particularly significant within the scope
of this book.
Theorem 2.5 (Least-Squares Solutions)
For a linear equation of $x \in \mathbf{R}^n$,

$Ax = y$  (2.153)

where $A \in \mathbf{R}^{m \times n}$ and $y \in \mathbf{R}^m$, the general form of the least-squares
solutions is given by

$x = A^\#y + (E - A^\#A)z$  (2.154)

where $z \in \mathbf{R}^n$ is an arbitrary vector, and $E$ is an identity matrix. The
minimum-norm solution among all the solutions provided by Eq. 2.154 is

$x = A^\#y$  (2.155)
For Eq. 2.153, the least-squares solution is the $x$ that minimizes the error
norm; namely,

$\min_x \|y - Ax\|$  (2.156)

where $\|\cdot\|$ denotes the Euclidean norm of a vector. The least-squares
solution is not necessarily unique. Every solution is obtained by changing $z$.
Equation 2.155 gives the solution that also minimizes $\|x\|$ among all the
solutions given by Eq. 2.154.
When at least one exact solution exists for Eq. 2.153, Eq. 2.154 yields the
general form of all the exact solutions. Note that the first and second terms
of Eq. 2.154 are perpendicular to each other. Indeed,

$(A^\#y)^T(E - A^\#A)z = y^T(A^\#)^T(E - A^\#A)z = y^T\{(E - A^\#A)A^\#\}^Tz = 0$  (2.157)

where Eq. 2.114 and the symmetry of $E - A^\#A$ were used.
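Theorem 2.5 is a one-liner with numpy. A minimal sketch of Eq. 2.154, using the A and y2 that appear in Example 2.25 below:

```python
import numpy as np

A = np.array([[1., -1., 1.], [-1., 1., -1.]])
y = np.array([1., 0.])          # no exact solution exists for this y

Ap = np.linalg.pinv(A)
E  = np.eye(3)

def least_squares_solution(z):
    return Ap @ y + (E - Ap @ A) @ z          # Eq. 2.154

x_min = least_squares_solution(np.zeros(3))   # z = 0: minimum-norm solution (Eq. 2.155)
x_alt = least_squares_solution(np.ones(3))

# the residual norm is the same for every z; only the solution norm changes
print(np.linalg.norm(y - A @ x_min), np.linalg.norm(y - A @ x_alt))  # both 0.7071
print(np.linalg.norm(x_min) < np.linalg.norm(x_alt))                 # True
```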
Example 2.25
We revisit Example 2.18. The linear equation of Eq. 2.153 is now given
with

$A = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix}$  (2.158)

and $y_1$ and $y_2 \in \mathbf{R}^2$ are the following vectors:

$y_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$, $y_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$  (2.159)

Recall that, in Example 2.18, Eq. 2.153 has an exact solution for $y_1$, but
does not have one for $y_2$. The general solution for $y = y_1$ is computed
by Eq. 2.154 and is investigated using

$A^\# = \frac{1}{6}\begin{pmatrix} 1 & -1 \\ -1 & 1 \\ 1 & -1 \end{pmatrix}$, $E - A^\#A = \frac{1}{3}\begin{pmatrix} 2 & 1 & -1 \\ 1 & 2 & 1 \\ -1 & 1 & 2 \end{pmatrix}$  (2.160)

For an arbitrary $z$, the error norm of Eq. 2.156 is identically zero. Indeed,

$\|y_1 - Ax\| = \|y_1 - \{AA^\#y_1 + A(E - A^\#A)z\}\| = \|(E - AA^\#)y_1\| = \left\|\frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ -1 \end{pmatrix}\right\| = 0$  (2.161)

Equation 2.161 verifies that $x$ of Eq. 2.154 with an arbitrary $z$ and $y = y_1$ is an exact solution of Eq. 2.153 with $y = y_1$. When $z = 0$, we have

$x = A^\#y_1 = \frac{1}{3}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} \equiv x_1$  (2.162)

When $z = (1\;\,1\;\,1)^T$, for example, we have

$x = A^\#y_1 + (E - A^\#A)z = \frac{1}{3}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} + \frac{1}{3}\begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \equiv x_2$  (2.163)

Thus $x_1$ and $x_2$ are two different solutions of Eq. 2.153; they have the
following relationship:

$\|x_1\| = \frac{1}{\sqrt{3}} < \sqrt{3} = \|x_2\|$  (2.164)

In general, the solution with $z = 0$ (namely, Eq. 2.155) provides the
solution with minimum magnitude.
For $y = y_2$, the error norm of Eq. 2.156 becomes

$\|y_2 - Ax\| = \|(E - AA^\#)y_2\| = \left\|\frac{1}{2}\begin{pmatrix} 1 \\ 1 \end{pmatrix}\right\| = \frac{1}{\sqrt{2}}$  (2.165)

Equation 2.165 implies that, since Eq. 2.153 does not have an exact solution
for $y = y_2$, $x$ of Eq. 2.154 with an arbitrary $z$ and $y = y_2$ is an
approximate solution. Note that the norm of the error is $1/\sqrt{2}$, regardless of
$z$. When $z = 0$, we have the following equation:

$x = A^\#y_2 = \frac{1}{6}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} \equiv x_3$  (2.166)

For $z = (1\;\,1\;\,1)^T$, we obtain

$x = A^\#y_2 + (E - A^\#A)z = \frac{1}{6}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} + \frac{1}{3}\begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix} = \frac{1}{6}\begin{pmatrix} 5 \\ 7 \\ 5 \end{pmatrix} \equiv x_4$  (2.167)

Hence, we have a result similar to Eq. 2.164, as follows:

$\|x_3\| = 0.2887 < 1.6583 = \|x_4\|$  (2.168)

In general, $x$ of Eq. 2.154 with $z = 0$ and $y = y_2$ provides the
approximate solution of Eq. 2.153 that has the minimum magnitude.
2.4.4 Weighted Pseudoinverse
Theorem 2.5 offers the general form of the least-squares solutions (Eq. 2.154)
and the least-squares solution with the minimum norm based on the Euclidean
norm (Eq. 2.155). In many physical problems, the components of $x$ or $y$ may
have different physical dimensions. Even if the components are physically
consistent, the significance of their magnitudes can differ. For example, if
$y$ is the joint torque vector of a robot manipulator, a torque value that is
moderate for a large motor could be critical for a small motor. In these
cases, it is necessary to evaluate the magnitude of the error vector and the
magnitude of the solution based on an appropriate weighting of the components.
In this subsection, we derive a result similar to Theorem 2.5, but for the
weighted norm.
Let $\|a\|_W$ represent the weighted norm such that

$\|a\|_W = \sqrt{a^TWa}$  (2.169)

where $W \in \mathbf{R}^{n \times n}$ is a symmetric positive definite matrix. A symmetric positive definite matrix can be represented as follows:

$W = W_0^TW_0$  (2.170)

where $W_0 \in \mathbf{R}^{n \times n}$ is nonsingular. In Eq. 2.170, $W_0$ is not unique. However,
for any symmetric and positive definite matrix $W$, there exists a unique
symmetric and positive definite matrix $W_0$ satisfying Eq. 2.170, which is
called the square root of $W$.
The following two equivalences can be readily shown:

$\min_x \|y - Ax\|_P \iff \min_x \|P_0y - P_0Ax\|$  (2.171)
$\min_x \|x\|_Q \iff \min_x \|Q_0x\|$  (2.172)

where $P, P_0 \in \mathbf{R}^{m \times m}$ and $Q, Q_0 \in \mathbf{R}^{n \times n}$, and $P_0$, $Q_0$ are any matrices
that satisfy Eq. 2.170 for the symmetric positive definite $P$ and $Q$, respectively.
Since $P_0$ is nonsingular, Eq. 2.153 can be equivalently transformed to

$P_0Ax = P_0y$  (2.173)
It is obvious that $x^* = Q_0x$ computed from the $x$ that minimizes $\|P_0y - P_0Ax\|$ minimizes $\|P_0y - P_0AQ_0^{-1}x^*\|$, and vice versa, because $Q_0$
is nonsingular. Therefore, the general form of the least-squares solutions of
$P_0Ax = P_0y$ can be represented by

$x = Q_0^{-1}x^*$  (2.174)
$x^* = (P_0AQ_0^{-1})^\#P_0y + \{E - (P_0AQ_0^{-1})^\#(P_0AQ_0^{-1})\}z$

We can see from Theorem 2.5 that $x^* = (P_0AQ_0^{-1})^\#P_0y$ (namely, $x = Q_0^{-1}(P_0AQ_0^{-1})^\#P_0y$) is the one that minimizes $\|x^*\| = \|Q_0x\|$ among
all the least-squares solutions. From Eqs. 2.171 and 2.172, we can conclude
that Eq. 2.174 provides the general form of the least-$P$-weighted-norm solutions, and that $x = Q_0^{-1}(P_0AQ_0^{-1})^\#P_0y$ is the minimum-$Q$-weighted-norm
solution among all the least-$P$-weighted-norm solutions. We summarize
this discussion in the following theorem.
Theorem 2.6 (Weighted-Norm Solutions)
For a linear equation of $x \in \mathbf{R}^n$,

$Ax = y$  (2.175)

where $A \in \mathbf{R}^{m \times n}$ and $y \in \mathbf{R}^m$, the general form of the least-$P$-weighted-norm solutions is given by

$x = Q_0^{-1}A^{*\#}P_0y + Q_0^{-1}(E - A^{*\#}A^*)z$  (2.176)
$A^* \triangleq P_0AQ_0^{-1}$  (2.177)

where $P_0$ and $Q_0$ are defined for $P$ and $Q$ by Eq. 2.170, and $z \in \mathbf{R}^n$ is
an arbitrary vector. The minimum-$Q$-weighted-norm solution among all
the solutions provided by Eq. 2.176 is

$x = Q_0^{-1}A^{*\#}P_0y$  (2.178)

where $Q_0^{-1}A^{*\#}P_0$ is called a weighted pseudoinverse of $A$.
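A compact implementation of the weighted pseudoinverse follows the theorem directly. The sketch below assumes scipy's sqrtm for the square roots $P_0$ and $Q_0$ of Eq. 2.170 and, for the diagonal weights of Example 2.26 below, reproduces the solution $x_5$ computed there:

```python
import numpy as np
from scipy.linalg import sqrtm

def weighted_pinv(A, P, Q):
    """Q0^-1 (P0 A Q0^-1)^# P0, cf. Theorem 2.6."""
    P0 = np.real(sqrtm(P))              # unique symmetric square roots (Eq. 2.170)
    Q0 = np.real(sqrtm(Q))
    Q0_inv = np.linalg.inv(Q0)
    A_star = P0 @ A @ Q0_inv            # Eq. 2.177
    return Q0_inv @ np.linalg.pinv(A_star) @ P0

A = np.array([[1., -1., 1.], [-1., 1., -1.]])
P = np.diag([1., 4.])
Q = np.diag([9., 4., 1.])
print(weighted_pinv(A, P, Q) @ np.array([1., 0.]))  # about [0.0163, -0.0367, 0.1469]
```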
Example 2.26
We again discuss a linear equation of Eq. 2.175 with

$A = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix}$, $y = y_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$  (2.179)
Suppose $P \in \mathbf{R}^{2 \times 2}$ and $Q \in \mathbf{R}^{3 \times 3}$ are given as follows:

$P = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}$, $Q = \begin{pmatrix} 9 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 1 \end{pmatrix}$  (2.180)

We choose $P_0$ and $Q_0$ as the square roots of $P$ and $Q$; namely,

$P_0 = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$, $Q_0 = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$  (2.181)
We have

$A^* = P_0AQ_0^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix}\begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}^{-1} = \frac{1}{6}\begin{pmatrix} 2 & -3 & 6 \\ -4 & 6 & -12 \end{pmatrix}$

$A^{*\#} = \begin{pmatrix} 0.0490 & -0.0980 \\ -0.0735 & 0.1469 \\ 0.1469 & -0.2939 \end{pmatrix}$

$E - A^{*\#}A^* = \begin{pmatrix} 0.9184 & 0.1224 & -0.2449 \\ 0.1224 & 0.8163 & 0.3673 \\ -0.2449 & 0.3673 & 0.2653 \end{pmatrix}$  (2.182)
The weighted-norm solutions are compared with the solutions we obtained in Example 2.25. When $z = 0$, we obtain the following solution
using Eq. 2.178:

$x = Q_0^{-1}A^{*\#}P_0y_2 = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}^{-1}\begin{pmatrix} 0.0490 & -0.0980 \\ -0.0735 & 0.1469 \\ 0.1469 & -0.2939 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0.0163 \\ -0.0367 \\ 0.1469 \end{pmatrix} \equiv x_5$  (2.183)
When $z = (1\;\,1\;\,1)^T$, we get the following solution from Eq. 2.176:

$x = x_5 + Q_0^{-1}(E - A^{*\#}A^*)z = \begin{pmatrix} 0.0163 \\ -0.0367 \\ 0.1469 \end{pmatrix} + \begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}^{-1}\begin{pmatrix} 0.9184 & 0.1224 & -0.2449 \\ 0.1224 & 0.8163 & 0.3673 \\ -0.2449 & 0.3673 & 0.2653 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0.2816 \\ 0.6163 \\ 0.5347 \end{pmatrix} \equiv x_6$  (2.184)

Note that, since Eq. 2.175 with $A$ and $y$ defined by Eq. 2.179 has no
exact solution, as we demonstrated in Example 2.18, $x_5$ and $x_6$ are also
approximations.
Let's compare $x_5$ and $x_6$ with $x_3$ and $x_4$ in Eqs. 2.166 and 2.167,
respectively. The $P$-weighted norms of the error vectors are as follows:

$\|y_2 - Ax_3\|_P = \|y_2 - Ax_4\|_P = 1.1180 > 0.8944 = \|y_2 - Ax_5\|_P = \|y_2 - Ax_6\|_P$  (2.185)

On the other hand, the Euclidean norms of the error vectors are

$\|y_2 - Ax_5\| = \|y_2 - Ax_6\| = 0.8246 > \frac{1}{\sqrt{2}} = \|y_2 - Ax_3\| = \|y_2 - Ax_4\|$  (2.186)

Note that the $Q$-weighted norms of the solutions are

$\|x_5\|_Q = 0.1714 < 1.5872 = \|x_6\|_Q$  (2.187)

Equations 2.185 and 2.186 show that $x_3$ and $x_4$ are better approximations
than $x_5$ and $x_6$ in the sense of the Euclidean norm, and that, on
the contrary, $x_5$ and $x_6$ are better than $x_3$ and $x_4$ in the sense of the
$P$-weighted norm.
2.4.5 Mappings and Projections
Pseudoinverses possess significant geometric characteristics as projections.
The linear mapping by a linear equation

$y = Ax$  (2.188)
Figure 2.4 Linear mapping by Eq. 2.188: $\mathcal{R}(A)$ is the range space of $A$ and $\mathcal{N}(A)$ is the null space of $A$; dim $\mathcal{R}(A)$ = rank $A$ and dim $\mathcal{N}(A) = n -$ rank $A$.
where $A \in \mathbf{R}^{m \times n}$, $y \in \mathbf{R}^m$, and $x \in \mathbf{R}^n$, is shown in Fig. 2.4. Let $\mathcal{R}(A) \subset \mathbf{R}^m$ and $\mathcal{N}(A) \subset \mathbf{R}^n$ be the range space†2.4 of $A$, and the null space†2.5 of
$A$, respectively. We represent the orthogonal complements†2.6 of $\mathcal{R}(A)$ and
$\mathcal{N}(A)$ by $\mathcal{R}(A)^\perp \subset \mathbf{R}^m$ and $\mathcal{N}(A)^\perp \subset \mathbf{R}^n$, respectively. The four subspaces
$\mathcal{R}(A)$, $\mathcal{R}(A)^\perp$, $\mathcal{N}(A)$, and $\mathcal{N}(A)^\perp$ have the following relationships with the
pseudoinverse of $A$:

$\mathcal{R}(A) = \mathcal{N}(A^\#)^\perp = \mathcal{R}(AA^\#) = \mathcal{N}(E - AA^\#)$  (2.189)
$\mathcal{R}(A)^\perp = \mathcal{N}(A^\#) = \mathcal{N}(AA^\#) = \mathcal{R}(E - AA^\#)$  (2.190)
$\mathcal{N}(A) = \mathcal{R}(A^\#)^\perp = \mathcal{N}(A^\#A) = \mathcal{R}(E - A^\#A)$  (2.191)
$\mathcal{N}(A)^\perp = \mathcal{R}(A^\#) = \mathcal{R}(A^\#A) = \mathcal{N}(E - A^\#A)$  (2.192)

These four relationships are illustrated conceptually in Fig. 2.5. We can interpret $\mathcal{R}(A)^\perp$ and $\mathcal{N}(A)^\perp$ as follows: If $y$ has a nonzero component in $\mathcal{R}(A)^\perp$,
Eq. 2.188 does not have an exact solution. If $x$ has a nonzero component
in $\mathcal{N}(A)^\perp$, its mapping by Eq. 2.188 is nonzero.
A square matrix $M \in \mathbf{R}^{n \times n}$ is called an orthogonal projection if its
mapping $Mx$ of $x \in \mathbf{R}^n$ is perpendicular to $x - Mx$ for any $x$. It is
noteworthy that $AA^\#$, $A^\#A$, $E - AA^\#$, and $E - A^\#A$ are all orthogonal
projections.

†2.4 The set of all $y$'s obtained by computing Eq. 2.188 for every $x \in \mathbf{R}^n$ makes a linear subspace in $\mathbf{R}^m$. This linear subspace is termed the range space.
†2.5 The set of all $x$'s that provide $y = 0$ in Eq. 2.188 makes a linear subspace in $\mathbf{R}^n$. This linear subspace is termed the null space of $A$.
†2.6 The orthogonal complement of a linear subspace implies the set of all the vectors perpendicular to all the vectors in the subspace. The orthogonal complement is also a linear subspace of the whole space.
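These projection properties are easy to verify numerically. A short sketch, using the same A as in the examples of this section:

```python
import numpy as np

A  = np.array([[1., -1., 1.], [-1., 1., -1.]])
Ap = np.linalg.pinv(A)

P_range = A @ Ap                  # projector onto R(A)   (Eq. 2.189)
P_null  = np.eye(3) - Ap @ A      # projector onto N(A)   (Eq. 2.191)

# orthogonal projections are symmetric and idempotent
for M in (P_range, P_null):
    assert np.allclose(M, M.T) and np.allclose(M @ M, M)

# decompose y0 into components in R(A) and its orthogonal complement
y0 = np.array([1., 0.])
yR  = P_range @ y0
yRp = y0 - yR
print(yR, yRp, yR @ yRp)          # [0.5 -0.5] [0.5 0.5] 0.0
```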
Figure 2.5 Relationships among the pseudoinverse, the range space, the null space, and their orthogonal complements. (a) Mapping by $A^\#$. (b) Projection by $A^\#A$. (c) Projection by $AA^\#$. (d) Projection by $E - A^\#A$. (e) Projection by $E - AA^\#$.

Example 2.27
Equation 2.188 with

$A = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \end{pmatrix}$  (2.193)

is again used as an example. The orthogonal projections are computed as
follows:

$AA^\# = \frac{1}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$, $E - AA^\# = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$
$A^\#A = \frac{1}{3}\begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \\ 1 & -1 & 1 \end{pmatrix}$, $E - A^\#A = \frac{1}{3}\begin{pmatrix} 2 & 1 & -1 \\ 1 & 2 & 1 \\ -1 & 1 & 2 \end{pmatrix}$  (2.194)
Now, we define the following vectors:

$x_0 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$, $y_0 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$  (2.195)

We can decompose $y_0$ into two orthogonal vectors, which are the members of $\mathcal{R}(A)$ and $\mathcal{R}(A)^\perp$ and are denoted by $y_R$ and $y_{R\perp}$, respectively.
Similarly, $x_0$ can be decomposed into two orthogonal vectors, which are
the members of $\mathcal{N}(A)$ and $\mathcal{N}(A)^\perp$ and are denoted by $x_N$ and $x_{N\perp}$,
respectively. They are computed using Eq. 2.194 as follows:

$y_R = AA^\#y_0 = \frac{1}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 \\ -1 \end{pmatrix}$
$y_{R\perp} = (E - AA^\#)y_0 = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 \\ 1 \end{pmatrix}$  (2.196)

$x_N = (E - A^\#A)x_0 = \frac{1}{3}\begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix}$
$x_{N\perp} = A^\#Ax_0 = \frac{1}{3}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}$  (2.197)

We check the results of Eqs. 2.196 and 2.197 by the following equations:

$y_R + y_{R\perp} = y_0$, $\quad y_R^Ty_{R\perp} = 0$, $\quad x_N + x_{N\perp} = x_0$, $\quad x_N^Tx_{N\perp} = 0$  (2.198)

Note that

rank $[A \;\; y_R]$ = rank $\begin{pmatrix} 1 & -1 & 1 & 0.5 \\ -1 & 1 & -1 & -0.5 \end{pmatrix}$ = rank $A$  (2.199)
Therefore, Eq. 2.188 with $y = y_R$ has exact solutions. An exact solution
is the pseudoinverse solution; namely,

$x = A^\#y_R = \frac{1}{6}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}$  (2.200)

This solution is equal to the pseudoinverse approximation of Eq. 2.188
with $y = y_0$. Indeed,

$x = A^\#y_0 = \frac{1}{6}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}$  (2.201)

Also, note that

$A^\#y_{R\perp} = 0$  (2.202)
2.4.6 Computation of Pseudoinverse
In this subsection, three computational methods for the pseudoinverse are summarized.
Computation by Singular Value Decomposition. The computational algorithms for SVD (see, for example, Golub and Van Loan 1983) are reliable
schemes. By Eq. 2.78, the pseudoinverse of a matrix can be computed using
SVD, as in Example 2.12.
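A minimal SVD-based pseudoinverse, of the kind the cited algorithms implement: invert the singular values above a tolerance, zero the rest, and reassemble.

```python
import numpy as np

def pinv_svd(A, tol=1e-12):
    """Pseudoinverse via SVD; singular values below tol (relative) are treated as zero."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.array([1.0 / x if x > tol * s[0] else 0.0 for x in s])
    return Vt.T @ np.diag(s_inv) @ U.T

A = np.array([[1., -1., 1.], [-1., 1., -1.]])    # rank 1
print(pinv_svd(A))                               # equals (1/6) [[1,-1],[-1,1],[1,-1]]
print(np.allclose(pinv_svd(A), np.linalg.pinv(A)))  # True
```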
