
Inner Product Spaces

1. Length and Dot Product in Rn


2. Inner Product Spaces
3. Orthonormal Bases: Gram-Schmidt Process

1. Length and Dot Product in Rn

Length:
The length of a vector v = (v1, v2, …, vn) in Rn is given by

|| v || = √(v1^2 + v2^2 + … + vn^2)  ( || v || is a real number)

Notes: The length of a vector is also called its norm

Properties of length (or norm)

(1) || v || ≥ 0
(2) || v || = 1 ⇒ v is called a unit vector
(3) || v || = 0 iff v = 0
(4) || cv || = | c | || v || (proved in Theorem 1)

Ex 1:
(a) In R5, the length of v = (0, -2, 1, 4, -2) is given by
|| v || = √(0^2 + (-2)^2 + 1^2 + 4^2 + (-2)^2) = √25 = 5

(b) In R3, the length of v = (2/√17, 2/√17, 3/√17) is given by

|| v || = √((2/√17)^2 + (2/√17)^2 + (3/√17)^2) = √(4/17 + 4/17 + 9/17) = √(17/17) = 1

(If the length of v is 1, then v is a unit vector)

A standard unit vector in Rn: only one component of the vector is
1 and the others are 0 (thus the length of this vector must be 1)

R2: {e1, e2} = {(1, 0), (0, 1)}
R3: {e1, e2, e3} = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
Rn: {e1, e2, …, en} = {(1, 0, …, 0), (0, 1, …, 0), …, (0, 0, …, 1)}

Notes: Two nonzero vectors are parallel if u = cv

(1) c > 0 ⇒ u and v have the same direction

(2) c < 0 ⇒ u and v have opposite directions

Theorem 1: Length of a scalar multiple

Let v be a vector in Rn and c be a scalar. Then

|| cv || = | c | || v ||

Pf:
v = (v1, v2, …, vn)
⇒ cv = (cv1, cv2, …, cvn)

|| cv || = || (cv1, cv2, …, cvn) ||
        = √((cv1)^2 + (cv2)^2 + … + (cvn)^2)
        = √(c^2 (v1^2 + v2^2 + … + vn^2))
        = | c | √(v1^2 + v2^2 + … + vn^2)
        = | c | || v ||

Theorem 2: How to find the unit vector in the direction of v

If v is a nonzero vector in Rn, then the vector u = v / || v ||
has length 1 and has the same direction as v. This vector u
is called the unit vector in the direction of v
Pf:
v is nonzero ⇒ || v || ≠ 0 ⇒ 1/|| v || > 0

If u = (1/|| v ||) v, then u has the same direction as v
(a positive scalar multiple of v)

|| u || = || (1/|| v ||) v || = (1/|| v ||) || v || = 1  (u has length 1, by || cv || = | c | || v ||)

Notes:
(1) The vector v / || v || is called the unit vector in the direction of v
(2) The process of finding the unit vector in the direction of v
    is called normalizing the vector v

Ex : Finding a unit vector

Find the unit vector in the direction of v = (3, -1, 2), and verify
that this vector has length 1
Sol:
v = (3, -1, 2) ⇒ || v || = √(3^2 + (-1)^2 + 2^2) = √14

v / || v || = (1/√14)(3, -1, 2) = (3/√14, -1/√14, 2/√14)

|| v / || v || || = √((3/√14)^2 + (-1/√14)^2 + (2/√14)^2)
                 = √(9/14 + 1/14 + 4/14) = √(14/14) = 1

⇒ v / || v || is a unit vector
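The normalization in the example above can be sketched in Python (a minimal illustration, not part of the original slides):

```python
import math

def norm(v):
    """Euclidean length ||v|| = sqrt(v1^2 + ... + vn^2)."""
    return math.sqrt(sum(x * x for x in v))

def normalize(v):
    """Return the unit vector v/||v|| in the direction of v."""
    n = norm(v)
    if n == 0:
        raise ValueError("cannot normalize the zero vector")
    return [x / n for x in v]

v = [3, -1, 2]
u = normalize(v)   # (3/sqrt(14), -1/sqrt(14), 2/sqrt(14))
```

Normalizing any nonzero vector this way always yields length 1, matching Theorem 2.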

Distance between two vectors:

The distance between two vectors u and v in Rn is
d(u, v) = || u - v ||

Properties of distance
(1) d(u, v) ≥ 0

(2) d(u, v) = 0 if and only if u = v

(3) d(u, v) = d(v, u) (commutative property of the distance function)

Ex 3: Finding the distance between two vectors

The distance between u = (0, 2, 2) and v = (2, 0, 1) is
d(u, v) = || u - v || = || (0-2, 2-0, 2-1) ||
        = √((-2)^2 + 2^2 + 1^2) = 3

Dot product in Rn:

The dot product of u = (u1, u2, …, un) and v = (v1, v2, …, vn)
is the scalar quantity

u · v = u1v1 + u2v2 + … + unvn  (u · v is a real number)

(The dot product is defined as the sum of component-by-component
multiplications)

Ex 4: Finding the dot product of two vectors

The dot product of u = (1, 2, 0, -3) and v = (3, -2, 4, 2) is

u · v = (1)(3) + (2)(-2) + (0)(4) + (-3)(2) = -7
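As a quick sketch (not from the original slides), the componentwise definition of the dot product translates directly to code:

```python
def dot(u, v):
    """Dot product u·v = u1*v1 + ... + un*vn."""
    return sum(a * b for a, b in zip(u, v))

# Ex 4, with signs reconstructed from the stated answer -7:
u = [1, 2, 0, -3]
v = [3, -2, 4, 2]
result = dot(u, v)   # 3 - 4 + 0 - 6 = -7
```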

Matrix Operations in Excel

SUMPRODUCT: calculates the inner product of two vectors

Theorem 3: Properties of the dot product

If u, v, and w are vectors in Rn and c is a scalar,
then the following properties are true
(1) u · v = v · u (commutative property of the dot product)

(2) u · (v + w) = u · v + u · w (distributive property of the dot product
    over vector addition)

(3) c(u · v) = (cu) · v = u · (cv) (associative property of scalar
    multiplication and the dot product)

(4) v · v = || v ||^2
(5) v · v ≥ 0, and v · v = 0 if and only if v = 0

The proofs of the above properties follow easily from the definition
of dot product

Euclidean n-space:
Rn was defined to be the set of all ordered n-tuples of real
numbers
When Rn is combined with the standard operations of
vector addition, scalar multiplication, vector length,
and dot product, the resulting vector space is called
Euclidean n-space

Ex : Finding dot products

u = (2, -2), v = (5, 8), w = (-4, 3)
(a) u · v  (b) (u · v)w  (c) u · (2v)  (d) || w ||^2  (e) u · (v - 2w)

Sol:
(a) u · v = (2)(5) + (-2)(8) = -6

(b) (u · v)w = -6w = -6(-4, 3) = (24, -18)

(c) u · (2v) = 2(u · v) = 2(-6) = -12
(d) || w ||^2 = w · w = (-4)(-4) + (3)(3) = 25

(e) v - 2w = (5 - (-8), 8 - 6) = (13, 2)

    u · (v - 2w) = (2)(13) + (-2)(2) = 26 - 4 = 22

Ex : (Using the properties of the dot product)

Given u · u = 39, u · v = -3, v · v = 79,
find (u + 2v) · (3u + v).
Sol:

(u + 2v) · (3u + v) = u · (3u + v) + 2v · (3u + v)
                    = u · (3u) + u · v + (2v) · (3u) + (2v) · v
                    = 3(u · u) + u · v + 6(v · u) + 2(v · v)
                    = 3(u · u) + 7(u · v) + 2(v · v)
                    = 3(39) + 7(-3) + 2(79) = 254

Theorem 4: The Cauchy-Schwarz inequality

If u and v are vectors in Rn, then
| u · v | ≤ || u || || v ||  (| u · v | denotes the absolute value of u · v)
(The geometric proof for this inequality is shown on the next slide)

Ex : An example of the Cauchy-Schwarz inequality

Verify the Cauchy-Schwarz inequality for u = (1, -1, 3)
and v = (2, 0, -1)
Sol:

u · v = -1, u · u = 11, v · v = 5
| u · v | = |-1| = 1
|| u || || v || = √(u · u) √(v · v) = √11 √5 = √55
⇒ | u · v | ≤ || u || || v ||
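The verification above can be replayed numerically (a sketch, not part of the original slides):

```python
import math

def dot(u, v):
    """Dot product of two vectors of equal dimension."""
    return sum(a * b for a, b in zip(u, v))

u = [1, -1, 3]
v = [2, 0, -1]

lhs = abs(dot(u, v))                    # |u·v| = 1
rhs = math.sqrt(dot(u, u) * dot(v, v))  # ||u|| ||v|| = sqrt(55)
assert lhs <= rhs                       # Cauchy-Schwarz holds
```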

Dot product and the angle between two vectors

To find the angle θ (0 ≤ θ ≤ π) between two nonzero vectors
u = (u1, u2) and v = (v1, v2) in R2, the Law of Cosines can be
applied to the triangle with sides u, v, and v - u to obtain

|| v - u ||^2 = || u ||^2 + || v ||^2 - 2 || u || || v || cos θ

Since
|| v - u ||^2 = (v1 - u1)^2 + (v2 - u2)^2
|| v ||^2 = v1^2 + v2^2
|| u ||^2 = u1^2 + u2^2

solving for cos θ gives

cos θ = (u1v1 + u2v2) / (|| u || || v ||) = (u · v) / (|| u || || v ||)

You can employ the fact that | cos θ | ≤ 1 to prove the Cauchy-Schwarz
inequality in R2

The angle between two nonzero vectors in Rn:

cos θ = (u · v) / (|| u || || v ||), 0 ≤ θ ≤ π

Opposite direction: θ = π, cos θ = -1, u · v < 0
Obtuse angle: π/2 < θ < π, cos θ < 0, u · v < 0
Right angle: θ = π/2, cos θ = 0, u · v = 0
Acute angle: 0 < θ < π/2, cos θ > 0, u · v > 0
Same direction: θ = 0, cos θ = 1, u · v > 0

Note:
The angle between the zero vector and another vector is
not defined (since the denominator cannot be zero)

Ex 8: Finding the angle between two vectors

u = (-4, 0, 2, -2), v = (2, 0, -1, 1)
Sol:

|| u || = √(u · u) = √((-4)^2 + 0^2 + 2^2 + (-2)^2) = √24

|| v || = √(v · v) = √(2^2 + 0^2 + (-1)^2 + 1^2) = √6

u · v = (-4)(2) + (0)(0) + (2)(-1) + (-2)(1) = -12

cos θ = (u · v) / (|| u || || v ||) = -12 / (√24 √6) = -12/√144 = -1
⇒ θ = π

⇒ u and v have opposite directions

(In fact, u = -2v, and by the earlier note on parallel vectors,
c = -2 < 0 means u and v have opposite directions)
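The angle formula can be sketched in Python (an illustration added here, not part of the original slides):

```python
import math

def dot(u, v):
    """Dot product u·v."""
    return sum(a * b for a, b in zip(u, v))

def angle(u, v):
    """Angle theta in [0, pi] between nonzero u and v, from cos θ = u·v/(||u|| ||v||)."""
    c = dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp guards against rounding drift

u = [-4, 0, 2, -2]
v = [2, 0, -1, 1]
theta = angle(u, v)   # pi, since u = -2v (opposite directions)
```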

Orthogonal vectors:
Two vectors u and v in Rn are orthogonal (perpendicular) if

u · v = 0

Note:
The vector 0 is said to be orthogonal to every vector

Ex 10: Finding orthogonal vectors

Determine all vectors in R2 that are orthogonal to u = (4, 2)
Sol:

u = (4, 2). Let v = (v1, v2)

u · v = (4, 2) · (v1, v2) = 4v1 + 2v2 = 0

⇒ v1 = -t/2, v2 = t

⇒ v = (-t/2, t), t ∈ R

Theorem 5: The triangle inequality

If u and v are vectors in Rn, then || u + v || ≤ || u || + || v ||
Pf:
|| u + v ||^2 = (u + v) · (u + v)
             = u · (u + v) + v · (u + v) = u · u + 2(u · v) + v · v
             = || u ||^2 + 2(u · v) + || v ||^2 ≤ || u ||^2 + 2| u · v | + || v ||^2  (c ≤ |c|)
             ≤ || u ||^2 + 2 || u || || v || + || v ||^2  (Cauchy-Schwarz inequality)
             = (|| u || + || v ||)^2

⇒ || u + v || ≤ || u || + || v ||

(The geometric representation of the triangle inequality:
for any triangle, the sum of the lengths of any two sides is
larger than the length of the third side (see the next slide))

Note:
Equality occurs in the triangle inequality if and only if
the vectors u and v have the same direction (in this
situation, cos θ = 1 and thus || u + v || = || u || + || v ||)

Theorem 6: The Pythagorean theorem

If u and v are vectors in Rn, then u and v are orthogonal
if and only if

|| u + v ||^2 = || u ||^2 + || v ||^2

(This follows because u · v = 0 in the proof for Theorem 5:
|| u + v ||^2 = || u ||^2 + 2(u · v) + || v ||^2)

The geometric meaning: for any right triangle, the sum of the squares of the
lengths of the two legs equals the square of the length of the hypotenuse.

Dot product and matrix multiplication:

A vector u = (u1, u2, …, un) in Rn can be represented as an n×1
column matrix:

u = [u1; u2; …; un],  v = [v1; v2; …; vn]

u · v = uT v = [u1 u2 … un][v1; v2; …; vn] = [u1v1 + u2v2 + … + unvn]

(The result of the dot product of u and v is the same as the result
of the matrix multiplication of uT and v, a 1×1 matrix)
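The equivalence of u · v and the matrix product uT v can be checked with a naive matrix multiply (a sketch added for illustration, not part of the original slides):

```python
def dot(u, v):
    """Dot product u·v."""
    return sum(a * b for a, b in zip(u, v))

def matmul(A, B):
    """Naive matrix product of A (m×k) and B (k×n), as lists of rows."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

u = [1, 2, -2]
v = [3, 0, 4]
uT = [u]                  # 1×n row matrix (u transposed)
v_col = [[x] for x in v]  # n×1 column matrix

product = matmul(uT, v_col)   # the 1×1 matrix [u·v]
```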

2. Inner Product Spaces

Inner product: represented by angle brackets ⟨u, v⟩

Let u, v, and w be vectors in a vector space V, and let c be
any scalar. An inner product on V is a function that associates
a real number ⟨u, v⟩ with each pair of vectors u and v and
satisfies the following axioms

(1) ⟨u, v⟩ = ⟨v, u⟩ (commutative property of the inner product)

(2) ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩ (distributive property of the inner product
    over vector addition)

(3) c⟨u, v⟩ = ⟨cu, v⟩ (associative property of scalar multiplication and the
    inner product)

(4) ⟨v, v⟩ ≥ 0, and ⟨v, v⟩ = 0 if and only if v = 0

Note:

u · v = dot product (Euclidean inner product for Rn)

⟨u, v⟩ = general inner product for a vector space V

Ex 1: The Euclidean inner product for Rn

Show that the dot product in Rn satisfies the four axioms
of an inner product
Sol:

u = (u1, u2, …, un), v = (v1, v2, …, vn)
⟨u, v⟩ = u · v = u1v1 + u2v2 + … + unvn
By Theorem 3, this dot product satisfies the required four axioms.
Thus, the dot product is an inner product on Rn

Ex 2: A different inner product for Rn

Show that the following function defines an inner product
on R2, where u = (u1, u2) and v = (v1, v2)

⟨u, v⟩ = u1v1 + 2u2v2
Sol:
(1) ⟨u, v⟩ = u1v1 + 2u2v2 = v1u1 + 2v2u2 = ⟨v, u⟩
(2) Let w = (w1, w2). Then

⟨u, v + w⟩ = u1(v1 + w1) + 2u2(v2 + w2)
           = u1v1 + u1w1 + 2u2v2 + 2u2w2
           = (u1v1 + 2u2v2) + (u1w1 + 2u2w2)
           = ⟨u, v⟩ + ⟨u, w⟩

(3) c⟨u, v⟩ = c(u1v1 + 2u2v2) = (cu1)v1 + 2(cu2)v2 = ⟨cu, v⟩

(4) ⟨v, v⟩ = v1^2 + 2v2^2 ≥ 0
    ⟨v, v⟩ = 0 ⇒ v1^2 + 2v2^2 = 0 ⇒ v1 = v2 = 0 (v = 0)

Note: Example 2 can be generalized such that

⟨u, v⟩ = c1u1v1 + c2u2v2 + … + cnunvn, ci > 0

can be an inner product on Rn
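The generalized weighted inner product can be spot-checked on concrete vectors (a sketch, not part of the original slides; the test vectors are arbitrary choices):

```python
def weighted_inner(u, v, c):
    """<u, v> = c1*u1*v1 + ... + cn*un*vn, with all weights ci > 0."""
    assert all(ci > 0 for ci in c)
    return sum(ci * a * b for ci, a, b in zip(c, u, v))

# Ex 2's inner product on R2: <u, v> = u1*v1 + 2*u2*v2
c = [1, 2]
u, v, w = [1, 3], [2, -1], [4, 5]

# spot-check the four axioms on these particular vectors
assert weighted_inner(u, v, c) == weighted_inner(v, u, c)              # axiom 1
vw = [a + b for a, b in zip(v, w)]
assert weighted_inner(u, vw, c) == weighted_inner(u, v, c) + weighted_inner(u, w, c)  # axiom 2
cu = [3 * x for x in u]
assert 3 * weighted_inner(u, v, c) == weighted_inner(cu, v, c)         # axiom 3
assert weighted_inner(v, v, c) > 0                                     # axiom 4 (v != 0)
```

A passing spot-check is not a proof, but it catches a wrong candidate quickly (e.g. Ex 3's function fails axiom 4).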

Ex 3: A function that is not an inner product

Show that the following function is not an inner product on R3

⟨u, v⟩ = u1v1 - 2u2v2 + u3v3

Sol:

Let v = (1, 2, 1)
Then ⟨v, v⟩ = (1)(1) - 2(2)(2) + (1)(1) = -6 < 0

⇒ Axiom 4 is not satisfied

Thus this function is not an inner product on R3

Theorem 7: Properties of inner products

Let u, v, and w be vectors in an inner product space V, and
let c be any real number
(1) ⟨0, v⟩ = ⟨v, 0⟩ = 0
(2) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
(3) ⟨u, cv⟩ = c⟨u, v⟩
To prove these properties, you can use only the four
axioms in the definition of inner product
Hint:
(1) ⟨0, v⟩ = ⟨0u, v⟩ = 0⟨u, v⟩ = 0
(2) ⟨u + v, w⟩ = ⟨w, u + v⟩ = ⟨w, u⟩ + ⟨w, v⟩ = ⟨u, w⟩ + ⟨v, w⟩
(3) ⟨u, cv⟩ = ⟨cv, u⟩ = c⟨v, u⟩ = c⟨u, v⟩

The definitions of norm (or length), distance, angle, orthogonality, and
normalizing for general inner product spaces closely parallel
those for Euclidean n-space

Norm (length) of u:

|| u || = √⟨u, u⟩

Distance between u and v:

d(u, v) = || u - v || = √⟨u - v, u - v⟩

Angle between two nonzero vectors u and v:

cos θ = ⟨u, v⟩ / (|| u || || v ||), 0 ≤ θ ≤ π

Orthogonal (u ⊥ v):
u and v are orthogonal if ⟨u, v⟩ = 0

Normalizing vectors
(1) If || v || = 1, then v is called a unit vector
(2) If v ≠ 0, then v / || v || is the unit vector in the
    direction of v (normalizing v)

Ex 6: Finding inner products

For p = a0 + a1x + … + anx^n and q = b0 + b1x + … + bnx^n,

⟨p, q⟩ = a0b0 + a1b1 + … + anbn is an inner product

Let p(x) = 1 - 2x^2 and q(x) = 4 - 2x + x^2 be polynomials in P2

(a) ⟨p, q⟩ = ?  (b) || q || = ?  (c) d(p, q) = ?
Sol:

(a) ⟨p, q⟩ = (1)(4) + (0)(-2) + (-2)(1) = 2

(b) || q || = √⟨q, q⟩ = √(4^2 + (-2)^2 + 1^2) = √21

(c) p - q = -3 + 2x - 3x^2
    d(p, q) = || p - q || = √⟨p - q, p - q⟩
            = √((-3)^2 + 2^2 + (-3)^2) = √22
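Representing each polynomial by its coefficient list, Ex 6 can be replayed in code (an illustration added here, not part of the original slides):

```python
import math

def poly_inner(p, q):
    """<p, q> = a0*b0 + a1*b1 + ... on coefficient lists [a0, a1, ...]."""
    return sum(a * b for a, b in zip(p, q))

p = [1, 0, -2]   # p(x) = 1 - 2x^2
q = [4, -2, 1]   # q(x) = 4 - 2x + x^2

ip = poly_inner(p, q)                 # (a) 4 + 0 - 2 = 2
norm_q = math.sqrt(poly_inner(q, q))  # (b) sqrt(21)
pq = [a - b for a, b in zip(p, q)]    # p - q = -3 + 2x - 3x^2
dist = math.sqrt(poly_inner(pq, pq))  # (c) sqrt(22)
```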

Properties of norm: (the same as the properties for length in Rn)

(1) || u || ≥ 0
(2) || u || = 0 if and only if u = 0
(3) || cu || = | c | || u ||

Properties of distance: (the same as the properties for distance in Rn)

(1) d(u, v) ≥ 0
(2) d(u, v) = 0 if and only if u = v

(3) d(u, v) = d(v, u)

Theorem 8
Let u and v be vectors in an inner product space V
(1) Cauchy-Schwarz inequality:

    | ⟨u, v⟩ | ≤ || u || || v ||  (Theorem 4 for Rn)

(2) Triangle inequality:

    || u + v || ≤ || u || + || v ||  (Theorem 5 for Rn)

(3) Pythagorean theorem:

    u and v are orthogonal if and only if

    || u + v ||^2 = || u ||^2 + || v ||^2  (Theorem 6 for Rn)

Orthogonal projections: For the dot product in Rn, we
define the orthogonal projection of u onto v to be projv u = av (a
scalar multiple of v), and the coefficient a can be derived as
follows
For a > 0,

|| av || = a || v || = || u || cos θ = || u || || v || cos θ / || v || = (u · v) / || v ||

⇒ a = (u · v) / || v ||^2 = (u · v) / (v · v)

projv u = ((u · v) / (v · v)) v

For inner product spaces:

Let u and v be two vectors in an inner product space V.
If v ≠ 0, then the orthogonal projection of u onto v is
given by

projv u = (⟨u, v⟩ / ⟨v, v⟩) v

Ex 10: Finding an orthogonal projection in R3

Use the Euclidean inner product in R3 to find the
orthogonal projection of u = (6, 2, 4) onto v = (1, 2, 0)
Sol:

⟨u, v⟩ = u · v = (6)(1) + (2)(2) + (4)(0) = 10

⟨v, v⟩ = v · v = 1^2 + 2^2 + 0^2 = 5

projv u = (⟨u, v⟩ / ⟨v, v⟩) v = (10/5)(1, 2, 0) = (2, 4, 0)
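The projection formula can be sketched in Python, and the defining property (the residual is orthogonal to v) checked directly (an illustration, not part of the original slides):

```python
def dot(u, v):
    """Dot product u·v."""
    return sum(a * b for a, b in zip(u, v))

def proj(u, v):
    """Orthogonal projection of u onto nonzero v: (u·v / v·v) v."""
    a = dot(u, v) / dot(v, v)
    return [a * x for x in v]

u = [6, 2, 4]
v = [1, 2, 0]
p = proj(u, v)                       # (2, 4, 0)
r = [a - b for a, b in zip(u, p)]    # residual u - proj_v(u)
```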

Theorem 9: Orthogonal projection and distance

Let u and v be two vectors in an inner product space V,
with v ≠ 0. Then

d(u, projv u) < d(u, cv), for all c ≠ ⟨u, v⟩ / ⟨v, v⟩

(That is, among all scalar multiples cv of v, the projection
projv u is the one closest to u)

Theorem 9 can be inferred straightforwardly from the Pythagorean Theorem, i.e.,
in a right triangle, the hypotenuse is longer than either leg

3. Orthonormal Bases: Gram-Schmidt Process

Orthogonal set:
A set S of vectors in an inner product space V is called an
orthogonal set if every pair of vectors in the set is orthogonal

S = {v1, v2, …, vn} ⊆ V
⟨vi, vj⟩ = 0 for i ≠ j

Orthonormal set:
An orthogonal set in which each vector is a unit vector is
called an orthonormal set

S = {v1, v2, …, vn} ⊆ V
For i = j: ⟨vi, vj⟩ = ⟨vi, vi⟩ = || vi ||^2 = 1
For i ≠ j: ⟨vi, vj⟩ = 0

Note:
If S is also a basis, then it is called an orthogonal basis or
an orthonormal basis
The standard basis for Rn is orthonormal. For example,

S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
is an orthonormal basis for R3

This section identifies some advantages of orthonormal bases,
and develops a procedure for constructing such bases, known
as the Gram-Schmidt orthonormalization process

Ex 1: A nonstandard orthonormal basis for R3

Show that the following set is an orthonormal basis

S = {v1, v2, v3}
  = {(1/√2, 1/√2, 0), (-√2/6, √2/6, 2√2/3), (2/3, -2/3, 1/3)}

Sol:
First, show that the three vectors are mutually orthogonal

v1 · v2 = -1/6 + 1/6 + 0 = 0
v1 · v3 = 2/(3√2) - 2/(3√2) + 0 = 0
v2 · v3 = -2√2/18 - 2√2/18 + 2√2/9 = 0

Second, show that each vector is of length 1

|| v1 || = √(v1 · v1) = √(1/2 + 1/2 + 0) = 1
|| v2 || = √(v2 · v2) = √(2/36 + 2/36 + 8/9) = 1
|| v3 || = √(v3 · v3) = √(4/9 + 4/9 + 1/9) = 1

Thus S is an orthonormal set

Because these three vectors are linearly independent (you
can check by solving c1v1 + c2v2 + c3v3 = 0) in R3 (of
dimension 3), by Theorem 4.12 they form a basis for R3.
So S is a (nonstandard) orthonormal basis for R3
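Both checks (mutual orthogonality and unit length) can be run in one loop: S is orthonormal exactly when vi · vj equals 1 for i = j and 0 otherwise. A sketch, using the component values reconstructed above:

```python
import math

def dot(u, v):
    """Dot product u·v."""
    return sum(a * b for a, b in zip(u, v))

s2 = math.sqrt(2)
S = [
    (1/s2, 1/s2, 0),
    (-s2/6, s2/6, 2*s2/3),
    (2/3, -2/3, 1/3),
]

# orthonormal: <vi, vj> = 1 if i == j, 0 otherwise (up to rounding)
for i, vi in enumerate(S):
    for j, vj in enumerate(S):
        expected = 1.0 if i == j else 0.0
        assert abs(dot(vi, vj) - expected) < 1e-12
```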

Ex 2: An orthonormal basis for P2(x)

In P2(x), with the inner product ⟨p, q⟩ = a0b0 + a1b1 + a2b2,
the standard basis B = {1, x, x^2} is orthonormal

Sol:

v1 = 1 + 0x + 0x^2, v2 = 0 + 1x + 0x^2, v3 = 0 + 0x + 1x^2

Then
⟨v1, v2⟩ = (1)(0) + (0)(1) + (0)(0) = 0
⟨v1, v3⟩ = (1)(0) + (0)(0) + (0)(1) = 0
⟨v2, v3⟩ = (0)(0) + (1)(0) + (0)(1) = 0

|| v1 || = √⟨v1, v1⟩ = √(1·1 + 0·0 + 0·0) = 1

|| v2 || = √⟨v2, v2⟩ = √(0·0 + 1·1 + 0·0) = 1

|| v3 || = √⟨v3, v3⟩ = √(0·0 + 0·0 + 1·1) = 1

Theorem 10: Orthogonal sets are linearly independent

If S = {v1, v2, …, vn} is an orthogonal set of nonzero vectors
in an inner product space V, then S is linearly independent
Pf:
S is an orthogonal set of nonzero vectors,
i.e., ⟨vi, vj⟩ = 0 for i ≠ j, and ⟨vi, vi⟩ ≠ 0

(If there is only the trivial solution for the ci's,
i.e., all ci = 0, then S is linearly independent)

Suppose c1v1 + c2v2 + … + cnvn = 0. Then for each i,

⟨c1v1 + c2v2 + … + cnvn, vi⟩ = ⟨0, vi⟩ = 0

⇒ c1⟨v1, vi⟩ + c2⟨v2, vi⟩ + … + ci⟨vi, vi⟩ + … + cn⟨vn, vi⟩
  = ci⟨vi, vi⟩ = 0

Since ⟨vi, vi⟩ ≠ 0 (because S is an orthogonal set of nonzero vectors),

ci = 0 for all i ⇒ S is linearly independent

Corollary to Theorem 10:

If V is an inner product space of dimension n, then any
orthogonal set of n nonzero vectors is a basis for V
1. By Theorem 10, if S = {v1, v2, …, vn} is an orthogonal set of n
   nonzero vectors, then S is linearly independent
2. According to Theorem 4.12, if S = {v1, v2, …, vn} is a linearly
   independent set of n vectors in V (with dimension n), then S is a
   basis for V
Based on the above two arguments, it is straightforward to
derive the above corollary to Theorem 10

Ex 4: Using orthogonality to test for a basis

Show that the following set is a basis for R4

S = {v1, v2, v3, v4}
  = {(2, 3, 2, -2), (1, 0, 0, 1), (-1, 0, 2, 1), (-1, 2, -1, 1)}

Sol:

v1, v2, v3, v4 are nonzero vectors, and

v1 · v2 = 2 + 0 + 0 - 2 = 0    v2 · v3 = -1 + 0 + 0 + 1 = 0

v1 · v3 = -2 + 0 + 4 - 2 = 0   v2 · v4 = -1 + 0 + 0 + 1 = 0

v1 · v4 = -2 + 6 - 2 - 2 = 0   v3 · v4 = 1 + 0 - 2 + 1 = 0

⇒ S is orthogonal
⇒ S is a basis for R4 (by the Corollary to Theorem 10)
The Corollary to Theorem 10 shows an advantage of introducing the concept of
orthogonal vectors, i.e., it is not necessary to solve linear systems to test
whether S is a basis (e.g., Ex 2 in Section 4.5) if S is a set of orthogonal vectors
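The pairwise checks above reduce to a few lines of code (a sketch, not part of the original slides):

```python
from itertools import combinations

def dot(u, v):
    """Dot product u·v."""
    return sum(a * b for a, b in zip(u, v))

S = [(2, 3, 2, -2), (1, 0, 0, 1), (-1, 0, 2, 1), (-1, 2, -1, 1)]

# four nonzero, mutually orthogonal vectors in R4 => a basis
# (by the Corollary to Theorem 10), with no linear system to solve
all_nonzero = all(any(x != 0 for x in v) for v in S)
all_orthogonal = all(dot(u, v) == 0 for u, v in combinations(S, 2))
is_basis = all_nonzero and all_orthogonal and len(S) == 4
```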

Theorem 11: Coordinates relative to an orthonormal basis

If B = {v1, v2, …, vn} is an orthonormal basis for an inner
product space V, then the unique coordinate representation of a
vector w with respect to B is

w = ⟨w, v1⟩v1 + ⟨w, v2⟩v2 + … + ⟨w, vn⟩vn

The above theorem tells us that it is easy to derive the coordinate
representation of a vector relative to an orthonormal basis, which is
another advantage of employing orthonormal bases

Pf:
B = {v1, v2, …, vn} is an orthonormal basis for V

w = k1v1 + k2v2 + … + knvn ∈ V (unique representation from Thm. 4.9)

Since ⟨vi, vj⟩ = 1 for i = j and ⟨vi, vj⟩ = 0 for i ≠ j, then

⟨w, vi⟩ = ⟨(k1v1 + k2v2 + … + knvn), vi⟩
        = k1⟨v1, vi⟩ + … + ki⟨vi, vi⟩ + … + kn⟨vn, vi⟩
        = ki  for i = 1 to n

⇒ w = ⟨w, v1⟩v1 + ⟨w, v2⟩v2 + … + ⟨w, vn⟩vn

Note:
If B = {v1, v2, …, vn} is an orthonormal basis for V and w ∈ V,
then the corresponding coordinate matrix of w relative to B is

[w]B = [⟨w, v1⟩; ⟨w, v2⟩; …; ⟨w, vn⟩]  (an n×1 column matrix)

Ex
For w = (5, -5, 2), find its coordinates relative to the standard
basis for R3
⟨w, v1⟩ = w · v1 = (5, -5, 2) · (1, 0, 0) = 5
⟨w, v2⟩ = w · v2 = (5, -5, 2) · (0, 1, 0) = -5
⟨w, v3⟩ = w · v3 = (5, -5, 2) · (0, 0, 1) = 2

[w]B = [5; -5; 2]

In fact, it is not necessary to use Thm. 11 to find the coordinates relative
to the standard basis, because we know that the coordinates of a vector
relative to the standard basis are the same as the components of that vector
The advantage of the orthonormal basis emerges when we try to find the
coordinate matrix of a vector relative to a nonstandard orthonormal basis
(see the next slide)

Ex 5: Representing vectors relative to an orthonormal basis

Find the coordinates of w = (5, -5, 2) relative to the following
orthonormal basis for R3

B = {v1, v2, v3} = {(3/5, 4/5, 0), (-4/5, 3/5, 0), (0, 0, 1)}
Sol:
⟨w, v1⟩ = w · v1 = (5, -5, 2) · (3/5, 4/5, 0) = -1
⟨w, v2⟩ = w · v2 = (5, -5, 2) · (-4/5, 3/5, 0) = -7
⟨w, v3⟩ = w · v3 = (5, -5, 2) · (0, 0, 1) = 2

[w]B = [-1; -7; 2]
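Theorem 11 makes the coordinate computation a row of inner products, as a sketch shows (an illustration, not part of the original slides):

```python
def dot(u, v):
    """Dot product u·v."""
    return sum(a * b for a, b in zip(u, v))

def coords(w, B):
    """[w]_B = (<w,v1>, ..., <w,vn>) for an orthonormal basis B (Theorem 11)."""
    return [dot(w, v) for v in B]

B = [(3/5, 4/5, 0), (-4/5, 3/5, 0), (0, 0, 1)]
w = (5, -5, 2)
c = coords(w, B)   # approximately [-1, -7, 2]
```

No linear system is solved; that is exactly the advantage of an orthonormal basis over a general one.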

The geometric intuition of the Gram-Schmidt process to find an
orthonormal basis in R2

{v1, v2} is a basis for R2

Let w1 = v1
Then w2 = v2 - projw1 v2 is orthogonal to w1 = v1

⇒ {w1/|| w1 ||, w2/|| w2 ||} is an orthonormal basis for R2

Gram-Schmidt orthonormalization process:

B = {v1, v2, …, vn} is a basis for an inner product space V

Let w1 = v1,  S1 = span({w1})

w2 = v2 - projS1 v2 = v2 - (⟨v2, w1⟩/⟨w1, w1⟩) w1,  S2 = span({w1, w2})

w3 = v3 - projS2 v3 = v3 - (⟨v3, w1⟩/⟨w1, w1⟩) w1 - (⟨v3, w2⟩/⟨w2, w2⟩) w2

…

wn = vn - projSn-1 vn = vn - Σi=1..n-1 (⟨vn, wi⟩/⟨wi, wi⟩) wi

(The orthogonal projection onto a subspace is the sum of the
orthogonal projections onto the vectors in an orthogonal basis
for that subspace)

⇒ B' = {w1, w2, …, wn} is an orthogonal basis

⇒ B'' = {w1/|| w1 ||, w2/|| w2 ||, …, wn/|| wn ||} is an orthonormal basis

Ex 7: Applying the Gram-Schmidt orthonormalization process

Apply the Gram-Schmidt process to the following basis for R3

B = {v1, v2, v3} = {(1, 1, 0), (1, 2, 0), (0, 1, 2)}

Sol:

w1 = v1 = (1, 1, 0)

w2 = v2 - (v2 · w1 / w1 · w1) w1 = (1, 2, 0) - (3/2)(1, 1, 0) = (-1/2, 1/2, 0)

w3 = v3 - (v3 · w1 / w1 · w1) w1 - (v3 · w2 / w2 · w2) w2
   = (0, 1, 2) - (1/2)(1, 1, 0) - ((1/2)/(1/2))(-1/2, 1/2, 0) = (0, 0, 2)

Orthogonal basis
B' = {w1, w2, w3} = {(1, 1, 0), (-1/2, 1/2, 0), (0, 0, 2)}

Orthonormal basis
B'' = {w1/|| w1 ||, w2/|| w2 ||, w3/|| w3 ||}
    = {(1/√2, 1/√2, 0), (-1/√2, 1/√2, 0), (0, 0, 1)}
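The process above, applied to Ex 7's basis, can be sketched as (an illustration added here, not part of the original slides):

```python
import math

def dot(u, v):
    """Dot product u·v."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Classical Gram-Schmidt: w_k = v_k - sum_i (<v_k,w_i>/<w_i,w_i>) w_i."""
    W = []
    for v in basis:
        w = list(v)
        for wi in W:
            a = dot(v, wi) / dot(wi, wi)   # projection coefficient onto w_i
            w = [x - a * y for x, y in zip(w, wi)]
        W.append(w)
    return W

def normalize(v):
    """Unit vector v/||v||."""
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

B = [(1, 1, 0), (1, 2, 0), (0, 1, 2)]
W = gram_schmidt(B)              # orthogonal basis B'
U = [normalize(w) for w in W]    # orthonormal basis B''
```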

Ex 10: Alternative form of the Gram-Schmidt orthonormalization process

Find an orthonormal basis for the solution space of the
homogeneous system of linear equations
x1 + x2 + 7x4 = 0
2x1 + x2 + 2x3 + 6x4 = 0

Sol:

[1 1 0 7 | 0]   G.-J.E.   [1 0 2 -1 | 0]
[2 1 2 6 | 0]   ------>   [0 1 -2 8 | 0]

⇒ x1 = -2s + t, x2 = 2s - 8t, x3 = s, x4 = t

⇒ x = s(-2, 2, 1, 0) + t(1, -8, 0, 1)

Thus one basis for the solution space is

B = {v1, v2} = {(-2, 2, 1, 0), (1, -8, 0, 1)}

w1 = v1 and u1 = w1/|| w1 || = (1/3)(-2, 2, 1, 0) = (-2/3, 2/3, 1/3, 0)

w2 = v2 - ⟨v2, u1⟩u1 (due to w2 = v2 - (⟨v2, u1⟩/⟨u1, u1⟩)u1 and ⟨u1, u1⟩ = 1)
   = (1, -8, 0, 1) - (-6)(-2/3, 2/3, 1/3, 0)
   = (-3, -4, 2, 1)

u2 = w2/|| w2 || = (1/√30)(-3, -4, 2, 1)

B'' = {(-2/3, 2/3, 1/3, 0), (-3/√30, -4/√30, 2/√30, 1/√30)}

In this alternative form, we always normalize wi to be ui before
processing wi+1. The advantage of this method is that it is easier
to calculate the orthogonal projection of wi+1 onto u1, u2, …, ui

Alternative form of the Gram-Schmidt orthonormalization process:

B = {v1, v2, …, vn} is a basis for an inner product space V

u1 = w1/|| w1 || = v1/|| v1 ||

u2 = w2/|| w2 ||, where w2 = v2 - ⟨v2, u1⟩u1

u3 = w3/|| w3 ||, where w3 = v3 - ⟨v3, u1⟩u1 - ⟨v3, u2⟩u2

…

un = wn/|| wn ||, where wn = vn - Σi=1..n-1 ⟨vn, ui⟩ui

⇒ {u1, u2, …, un} is an orthonormal basis for V
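The alternative form, applied to Ex 10's basis for the solution space, can be sketched as (an illustration, not part of the original slides):

```python
import math

def dot(u, v):
    """Dot product u·v."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt_normalized(basis):
    """Alternative form: normalize each w_i to u_i immediately,
    so w_n = v_n - sum_i <v_n, u_i> u_i (no division, since <u_i, u_i> = 1)."""
    U = []
    for v in basis:
        w = list(v)
        for u in U:
            a = dot(v, u)   # projection coefficient onto the unit vector u_i
            w = [x - a * y for x, y in zip(w, u)]
        n = math.sqrt(dot(w, w))
        U.append([x / n for x in w])
    return U

# Ex 10's basis for the solution space
B = [(-2, 2, 1, 0), (1, -8, 0, 1)]
U = gram_schmidt_normalized(B)
# U[0] ≈ (-2/3, 2/3, 1/3, 0), U[1] ≈ (-3, -4, 2, 1)/sqrt(30)
```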
