
Theorem 1. (Rank-Nullity Theorem) Let $T : V \to W$ be a linear transformation between finite-dimensional vector spaces. Then

$$\dim(V) = \dim(\operatorname{Ker}(T)) + \dim(\operatorname{Im}(T)).$$

Proof. Suppose $(u_1, \dots, u_m)$ forms a basis of $\operatorname{Ker}(T)$. It can then be extended to a basis of $V$: $(u_1, \dots, u_m, w_1, \dots, w_n)$. We now see that $\dim(\operatorname{Ker}(T)) = m$ and $\dim(V) = m + n$, so we only have to show that $\dim(\operatorname{Im}(T)) = n$. We will show that $(Tw_1, \dots, Tw_n)$ is a basis of the image of $T$. Let $v \in V$. Then there exist unique scalars such that

$$v = a_1 u_1 + \dots + a_m u_m + b_1 w_1 + \dots + b_n w_n.$$

Because $T$ is a linear transformation we find

$$Tv = T(a_1 u_1 + \dots + a_m u_m + b_1 w_1 + \dots + b_n w_n) = a_1 Tu_1 + \dots + a_m Tu_m + b_1 Tw_1 + \dots + b_n Tw_n,$$

and because $u_1, \dots, u_m \in \operatorname{Ker}(T)$ this implies

$$Tv = b_1 Tw_1 + \dots + b_n Tw_n.$$

Therefore $(Tw_1, \dots, Tw_n)$ spans the image of $T$. We now have to show that these vectors are linearly independent, which we do by showing that $\lambda_1 Tw_1 + \dots + \lambda_n Tw_n = 0$ forces $\lambda_1 = \dots = \lambda_n = 0$. Notice that $\lambda_1 Tw_1 + \dots + \lambda_n Tw_n = 0$ means $T(\lambda_1 w_1 + \dots + \lambda_n w_n) = 0$, and therefore $\lambda_1 w_1 + \dots + \lambda_n w_n \in \operatorname{Ker}(T)$. We can then write $\lambda_1 w_1 + \dots + \lambda_n w_n = \mu_1 u_1 + \dots + \mu_m u_m$, i.e.

$$\lambda_1 w_1 + \dots + \lambda_n w_n - \mu_1 u_1 - \dots - \mu_m u_m = 0.$$

But because $(u_1, \dots, u_m, w_1, \dots, w_n)$ is a basis of $V$, the only solution is $\lambda_1 = \dots = \lambda_n = \mu_1 = \dots = \mu_m = 0$. Therefore $(Tw_1, \dots, Tw_n)$ is linearly independent and is a basis for $\operatorname{Im}(T)$, so $\dim(\operatorname{Im}(T)) = n$. $\square$

Theorem 2. (Isomorphisms) Let $T : U \to V$ be a linear map between finite-dimensional vector spaces with $\dim(U) = \dim(V)$. Then the following are equivalent:
1. $T$ is an isomorphism;
2. $T$ is a monomorphism;
3. $T$ is an epimorphism.

Proof. Suppose $T$ is an isomorphism, i.e. $T$ is a bijection; then by definition it is also a monomorphism (injective) and an epimorphism (surjective). So $1 \Rightarrow 2$ and $1 \Rightarrow 3$. Now let $T$ be a monomorphism. Then $\operatorname{Ker}(T) = \{0\}$, so $\dim(\operatorname{Ker}(T)) = 0$. By Theorem 1 we know $0 + \dim(\operatorname{Im}(T)) = \dim(U) = \dim(V)$. Because $\operatorname{Im}(T) \subseteq V$ and they have equal dimensions, we find $\operatorname{Im}(T) = V$; thus $T$ is an epimorphism, and $2 \Rightarrow 3$. Now let $T$ be an epimorphism. Then $\operatorname{Im}(T) = V$ and $\dim(\operatorname{Im}(T)) = \dim(V)$, so by Theorem 1 $\dim(\operatorname{Ker}(T)) = 0$, and therefore $T$ is a monomorphism. So $3 \Rightarrow 2$. Because $T$ is an isomorphism precisely when it is both a monomorphism and an epimorphism, and because $2 \Rightarrow 3$ and $3 \Rightarrow 2$, we find $2 \Rightarrow 1$ and $3 \Rightarrow 1$. This concludes the proof. $\square$

Theorem 3. (Cauchy-Schwarz inequality) For any vectors $x, y$ in an inner product space,

$$|(x, y)| \le \|x\|\,\|y\|.$$

Proof. If $y = 0$ both sides are zero, so assume $y \neq 0$. Consider $\|x - ty\|^2 = (x - ty, x - ty) \ge 0$, where $t$ is an arbitrary scalar. Now notice:

$$(x - ty, x - ty) = (x, x - ty) - t(y, x - ty) = (x, x) - \bar{t}(x, y) - t(y, x) + t\bar{t}(y, y) = \|x\|^2 - \bar{t}(x, y) - t(y, x) + |t|^2\|y\|^2.$$

We now substitute $t = \frac{(x, y)}{\|y\|^2}$, so that $\bar{t} = \frac{(y, x)}{\|y\|^2}$, to find:

$$(x - ty, x - ty) = \|x\|^2 - \frac{(y, x)(x, y)}{\|y\|^2} - \frac{(x, y)(y, x)}{\|y\|^2} + \frac{|(x, y)|^2}{\|y\|^2} = \|x\|^2 - \frac{|(x, y)|^2}{\|y\|^2}.$$

Therefore we find

$$\|x\|^2 - \frac{|(x, y)|^2}{\|y\|^2} \ge 0 \;\Longrightarrow\; |(x, y)|^2 \le \|x\|^2\,\|y\|^2 \;\Longrightarrow\; |(x, y)| \le \|x\|\,\|y\|. \qquad \square$$
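As a brief aside (not part of the original notes), Theorems 1 and 2 are easy to sanity-check numerically. The sketch below assumes numpy and scipy are available; the matrix sizes and random seed are arbitrary choices of mine.

```python
# Numerical illustration of the Rank-Nullity Theorem (Theorem 1) and,
# for square matrices, the equivalences of Theorem 2. A sanity check,
# not a proof.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))       # a linear map T : R^6 -> R^4

rank = np.linalg.matrix_rank(A)       # dim(Im(T))
nullity = null_space(A).shape[1]      # dim(Ker(T))
assert rank + nullity == A.shape[1]   # dim(V) = dim(Ker(T)) + dim(Im(T))

# Theorem 2 for a square matrix: nullity 0 (injective) coincides with
# full rank (surjective), which is exactly invertibility.
B = rng.standard_normal((5, 5))
assert (np.linalg.matrix_rank(B) == 5) == (null_space(B).shape[1] == 0)
```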
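In the same spirit, here is a quick numeric check of the Cauchy-Schwarz inequality just proved, using the notes' convention that $(x, y)$ is linear in the first argument; again an illustration, with a small tolerance for floating-point error.

```python
# Random-vector check of |(x, y)| <= ||x|| ||y|| (Theorem 3) over C.
import numpy as np

def inner(x, y):
    # (x, y) linear in the first argument: sum_i x_i * conj(y_i)
    return np.vdot(y, x)

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    y = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    assert abs(inner(x, y)) <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-12
```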

Theorem 4. (Triangle inequality) For any vectors $x, y$ in an inner product space we have $\|x + y\| \le \|x\| + \|y\|$.

Proof. This is a direct consequence of the Cauchy-Schwarz inequality $|(x, y)| \le \|x\|\,\|y\|$. Consider:

$$\|x + y\|^2 = (x + y, x + y) = \|x\|^2 + \|y\|^2 + (x, y) + (y, x) \le \|x\|^2 + \|y\|^2 + 2|(x, y)| \le \|x\|^2 + \|y\|^2 + 2\|x\|\,\|y\| = (\|x\| + \|y\|)^2.$$

Taking square roots, $\|x + y\| \le \|x\| + \|y\|$. $\square$
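The same kind of random test illustrates the triangle inequality; this is purely a sanity check of the statement, not of the argument.

```python
# Random-vector check of ||x + y|| <= ||x|| + ||y|| (Theorem 4).
import numpy as np

rng = np.random.default_rng(2)
for _ in range(1000):
    x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    y = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y) + 1e-12
```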

Theorem 5. (Uniqueness of orthogonal projection) Let $V$ be an inner product space, let $E \subseteq V$ be a finite-dimensional subspace, and let $v \in V$. Then the orthogonal projection $P_E(v)$ of $v$ onto $E$ exists and is unique.

Proof. Let $v_1, v_2, \dots, v_r$ be an orthogonal basis of $E$ and define

$$w := \sum_{k=1}^{r} \alpha_k v_k, \quad \text{where } \alpha_k = \frac{(v, v_k)}{\|v_k\|^2}.$$

We want to show that $v - w \perp E$; for this it is sufficient to show that $v - w \perp v_k$ for $k = 1, 2, \dots, r$. Computing the inner product, we get for $k = 1, 2, \dots, r$ (the cross terms vanish because the basis is orthogonal):

$$(v - w, v_k) = (v, v_k) - (w, v_k) = (v, v_k) - \sum_{j=1}^{r} \alpha_j (v_j, v_k) = (v, v_k) - \alpha_k (v_k, v_k) = (v, v_k) - \frac{(v, v_k)}{\|v_k\|^2}\,\|v_k\|^2 = 0,$$

so $v - w \perp E$. Therefore the orthogonal projection $P_E(v)$ onto $E$ exists, and it is defined as a vector $w$ such that:
1. $w \in E$;
2. $v - w \perp E$.

For uniqueness and minimality, take $x \in E$, set $y = w - x$, and write

$$v - x = v - w + w - x = v - w + y.$$

Because $v - w \perp E$ and $y \in E$, we have $y \perp v - w$, and by the Pythagorean theorem

$$\|v - x\|^2 = \|v - w\|^2 + \|y\|^2 \ge \|v - w\|^2.$$

Note that equality only happens when $y = 0$, i.e. when $x = w$. Therefore the orthogonal projection $w$ not only minimizes the distance to $v$: if another vector in $E$ has the same distance to $v$, it must be $w$ itself, so the projection is also unique. $\square$

Theorem 6. (Gram-Schmidt algorithm) Let $x_1, \dots, x_n$ be a basis for $V$. Then there is also an orthogonal basis $v_1, \dots, v_n$ of $V$.

Proof. Let $v_1 = x_1$ and let $E_1$ be the space spanned by $v_1$, i.e. $E_1 = \operatorname{span}(v_1)$. We then define $v_2$ by

$$v_2 = x_2 - P_{E_1}(x_2) = x_2 - \frac{(x_2, v_1)}{\|v_1\|^2}\,v_1.$$

When we now define $E_2 = \operatorname{span}(v_1, v_2)$, we notice that $v_2 \perp v_1$ and that every vector $e_2 \in E_2$ can be written as a linear combination of $x_1, x_2$; therefore $E_2 = \operatorname{span}(x_1, x_2)$. Repeating this procedure $n$ times, we find that $\operatorname{span}(x_1, x_2, \dots, x_n) = \operatorname{span}(v_1, v_2, \dots, v_n) = V$, so $v_1, \dots, v_n$ spans $V$, and because these vectors are orthogonal to each other (and nonzero) they are also linearly independent. Therefore they form a basis of $V$. $\square$

Theorem 7. (An isometry is always strong) Recall that an isometry is a strong isometry if it also preserves the inner product. For any isometry $U$ we have $(Ux, Uy) = (x, y)$.

Proof. Let $U : X \to Y$ be an isometry. The proof uses the polarization identity, which for a complex space is given by

$$(x, y) = \frac{1}{4}\left(\|x + y\|^2 - \|x - y\|^2 + i\|x + iy\|^2 - i\|x - iy\|^2\right) = \frac{1}{4}\sum_{\alpha = \pm 1, \pm i} \alpha\,\|x + \alpha y\|^2.$$

We find

$$(Ux, Uy) = \frac{1}{4}\sum_{\alpha = \pm 1, \pm i} \alpha\,\|Ux + \alpha\,Uy\|^2 = \frac{1}{4}\sum_{\alpha = \pm 1, \pm i} \alpha\,\|U(x + \alpha y)\|^2 = \frac{1}{4}\sum_{\alpha = \pm 1, \pm i} \alpha\,\|x + \alpha y\|^2 = (x, y),$$

where the second equality uses the linearity of $U$ and the third uses that $U$ is an isometry. Because $\mathbb{R} \subset \mathbb{C}$, this is also true for a real space. $\square$
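Looking back at Theorems 5 and 6: the projection coefficient $\alpha_k = (v, v_k)/\|v_k\|^2$ is exactly what each Gram-Schmidt step subtracts. Below is a minimal Python sketch of the algorithm; the helper names `project` and `gram_schmidt` are mine, not from the notes.

```python
# Gram-Schmidt (Theorem 6) built on the projection formula of Theorem 5.
import numpy as np

def project(v, basis):
    """Orthogonal projection P_E(v), where E is spanned by an
    orthogonal (not necessarily normalized) basis."""
    return sum((np.vdot(b, v) / np.vdot(b, b)) * b for b in basis)

def gram_schmidt(xs):
    """Turn a basis xs into an orthogonal basis, as in Theorem 6."""
    vs = []
    for x in xs:
        v = x - project(x, vs) if vs else x
        vs.append(v)
    return vs

xs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
vs = gram_schmidt(xs)
for i in range(3):
    for j in range(i):
        assert abs(np.vdot(vs[i], vs[j])) < 1e-12   # pairwise orthogonal
```

Dividing each $v_k$ by its length afterwards gives the orthonormal version used in Theorem 8 below.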
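The polarization identity of Theorem 7 can likewise be checked numerically; the sketch below also verifies that a unitary matrix (an isometry) preserves the inner product. Building $U$ from a QR factorization is my own choice for the example.

```python
# Numeric check of the complex polarization identity and of Theorem 7.
import numpy as np

def polarize(x, y):
    """(x, y) recovered from norms alone via polarization."""
    return 0.25 * sum(a * np.linalg.norm(x + a * y) ** 2
                      for a in (1, -1, 1j, -1j))

rng = np.random.default_rng(3)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
assert np.allclose(polarize(x, y), np.vdot(y, x))   # the identity itself

# A unitary U is an isometry; polarization then forces (Ux, Uy) = (x, y).
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(M)                               # U is unitary
assert np.allclose(np.vdot(U @ y, U @ x), np.vdot(y, x))
```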

Theorem 8. A matrix $A$ is diagonalizable (i.e., unitarily equivalent to a diagonal one) if and only if it has an orthogonal, and therefore orthonormal, basis of eigenvectors.

Proof. Let the matrix $A$ be unitarily equivalent to a diagonal one, $A = UDU^*$. The vectors $e_k$ of the standard basis are eigenvectors of $D$, so $De_k = \lambda_k e_k$. Because $U$ is unitary we have $U^*U = I$, and therefore we find

$$AUe_k = UDU^*Ue_k = UDe_k = U\lambda_k e_k = \lambda_k\,Ue_k,$$

so $Ue_k$ is an eigenvector of $A$. Because $U$ is unitary and $(e_k)$ is an orthogonal basis, $(Ue_k)$ is an orthogonal basis as well. So $A$ has an orthogonal basis of eigenvectors.

Now let $A$ have an orthogonal basis $u_1, \dots, u_n$ of eigenvectors. We can always divide the vectors by their lengths, so we may assume this basis is orthonormal. Let $D$ be the matrix of $A$ in the basis $B = (u_1, \dots, u_n)$. Because these are eigenvectors, $D$ is a diagonal matrix. Let $U$ be the matrix with columns $u_1, \dots, u_n$. Because these columns form an orthonormal basis, $U$ is unitary. We now find, where $S$ is the standard basis $(e_k)$:

$$A = [A]_{SS} = [I]_{SB}\,[A]_{BB}\,[I]_{BS} = UDU^{-1},$$

and because $U$ is unitary this also equals $UDU^*$. Therefore $A = UDU^*$, and so $A$ is diagonalizable. $\square$

Theorem 9. (Schur representation) Let $A$ be a matrix. Then $A = UTU^*$, where $U$ is unitary and $T$ is an upper triangular matrix.

Proof. We prove this by induction on $\dim(X)$. The theorem is true for $\dim(X) = 1$, since any $1 \times 1$ matrix is upper triangular. Now suppose the theorem is true for $\dim(X) = n - 1$; we want to show that it is then also true for $\dim(X) = n$. Let $\lambda_1$ be an eigenvalue of $A$ and let $u_1$ be a corresponding eigenvector of unit length, so that $Au_1 = \lambda_1 u_1$. Denote $E = u_1^{\perp}$, and let $v_2, \dots, v_n$ be an orthonormal basis of $E$. Notice that $\dim(E) = \dim(X) - 1 = n - 1$, so $u_1, v_2, \dots, v_n$ is an orthonormal basis of $X$, and the change of basis to it is unitary. In this basis the first column of the matrix of $A$ is $(\lambda_1, 0, \dots, 0)^T$, and the first row is $(\lambda_1, *, \dots, *)$, where the entries $*$ are numbers that are not important. The lower right section of the matrix, i.e. the section that is not in the first row or column, defines a linear transformation in $E$, and because $\dim(E) = n - 1$, by the induction hypothesis we can choose a basis of $E$ in which this section is upper triangular. Then the matrix of $A$ is upper triangular as well. By mathematical induction we have shown that any matrix $A$ can be written as $A = UTU^*$, where $U$ is unitary and $T$ is an upper triangular matrix. $\square$

Theorem 10. (Diagonalizability of normal matrices) Any matrix satisfying $N^*N = NN^*$, i.e. any normal matrix, can be represented as $N = UDU^*$, where $U$ is a unitary matrix and $D$ is a diagonal one.

Proof. We first assume that $N$ has been (upper) triangularized by Theorem 9; we then show that this representation is in fact diagonal, by induction on the dimension of the matrix. If $N$ is a $1 \times 1$ matrix then it is certainly diagonal, since there are no off-diagonal elements. Now suppose we have proved that any $(n-1) \times (n-1)$ upper triangular normal matrix is diagonal, and let $N$ be an $n \times n$ upper triangular normal matrix. The first row of this matrix is $(a_{1,1}, \dots, a_{1,n})$ and the first column is $(a_{1,1}, 0, \dots, 0)^T$. Let us call the lower right section of the matrix $N_1$, and compare the upper left entries of $N^*N$ and $NN^*$. We find that

$$(N^*N)_{1,1} = \overline{a_{1,1}}\,a_{1,1} = |a_{1,1}|^2,$$

and for $NN^*$ we find

$$(NN^*)_{1,1} = a_{1,1}\overline{a_{1,1}} + \dots + a_{1,n}\overline{a_{1,n}} = |a_{1,1}|^2 + \dots + |a_{1,n}|^2.$$

Because $N^*N = NN^*$ the upper left entries must be equal, and this can only be the case when $a_{1,2} = \dots = a_{1,n} = 0$. Therefore the first row of $N$ can be written as $(a_{1,1}, 0, \dots, 0)$. Now we have to show that $N_1$, too, is diagonal. Because $N^*N = NN^*$, and because the lower right sections of these matrices are $N_1^*N_1$ and $N_1N_1^*$ respectively, these two must also be equal. So $N_1$ is an $(n-1) \times (n-1)$ upper triangular normal matrix, and by the induction hypothesis it must also be diagonal. Therefore $N$ is diagonal. $\square$
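As a computational illustration of Theorem 9 (of the statement, not of its inductive proof), scipy's `schur` routine produces exactly such a factorization $A = UTU^*$:

```python
# Schur representation A = U T U* (Theorem 9), checked numerically.
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T, U = schur(A, output='complex')               # A = U @ T @ U*
assert np.allclose(U @ T @ U.conj().T, A)
assert np.allclose(T, np.triu(T))               # T is upper triangular
assert np.allclose(U @ U.conj().T, np.eye(4))   # U is unitary
```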
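And for Theorem 10: feeding a normal matrix to the same routine, the triangular factor comes out diagonal, so $N = UDU^*$ with the columns of $U$ an orthonormal basis of eigenvectors, as Theorem 8 predicts. A Hermitian matrix is used here as a convenient example of a normal matrix; that choice is mine.

```python
# Theorem 10 in practice: the Schur factor of a normal matrix is diagonal.
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(5)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
N = B + B.conj().T                               # Hermitian, hence normal
assert np.allclose(N.conj().T @ N, N @ N.conj().T)

D, U = schur(N, output='complex')
assert np.allclose(D, np.diag(np.diagonal(D)))   # off-diagonal part vanishes
assert np.allclose(U @ D @ U.conj().T, N)        # N = U D U*
```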
