
LIST OF PRACTICALS BASED ON CSDC1-201: LINEAR ALGEBRA FOR COMPUTER SCIENCE

Practical Title

1. Create and transform vectors and matrices (the transpose of a vector/matrix and the conjugate transpose of a vector/matrix).
2. Solve systems of homogeneous and non-homogeneous equations using Gauss elimination and Cramer's rule.
3. Reduce a matrix to echelon form and find its rank.
4. Generate the LU decomposition of a matrix.
5. Find the cofactors, determinant, adjoint and inverse of a matrix.
6. Generate bases of the column space, null space, row space and left null space of a matrix.
7. Solve systems of equations using numerical methods; check the linear dependency of vectors.
8. Generate a linear combination of given vectors of R^n (or matrices of the same size) and find the transition matrix of a given matrix space.
9. Find the orthonormal basis of a given vector space using the Gram-Schmidt orthogonalization process.
10. Check the diagonalizability of matrices, find the corresponding eigenvalues, and verify the Cayley-Hamilton theorem; problems on LU factorization.
11. Application of linear algebra: coding and decoding of messages using non-singular matrices (e.g., encode "Linear Algebra is fun" and then decode it).
12. Solve linear programming problems (graphical and simplex methods).
Create a Matrix
Page 1 of 27

Matrices are represented in Mathematica with lists. They can be entered directly with the { } notation, constructed from a formula, or imported from a data file. Mathematica also has commands for creating diagonal matrices, constant matrices, and other special matrix types.
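For readers following along outside Mathematica, the same constructions can be sketched in Python with NumPy. This is an illustrative parallel, not the notebook code, and all of the values are arbitrary:

```python
import numpy as np

# A matrix entered directly from nested lists (the analogue of { } notation)
m = np.array([[1, 2, 3],
              [4, 5, 6]])

# A matrix constructed from a formula, like Table[f, {i, m}, {j, n}]
# (entries here are (i + 1) + (j + 1) for 0-based indices i, j)
t = np.fromfunction(lambda i, j: (i + 1) + (j + 1), (4, 5), dtype=int)

# Special matrix types: identity, diagonal, constant
ident = np.eye(3, dtype=int)
diag = np.diag([1, 2, 3])
const = np.full((2, 2), 7)
```

The nested-list form mirrors Mathematica's `{{...},{...}}` input, and `np.fromfunction` plays the role of `Table`.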
A matrix can be entered directly with the { } notation:

In[125]:=
Out[125]=

You can show the result in matrix notation with MatrixForm:


In[126]:=
Out[126]//MatrixForm=

Table is another way of entering a matrix: it builds a grid of values from a formula. It can be convenient to use when the entries follow a pattern:

In[127]:=

In[129]:=
Out[129]//MatrixForm=

This is a 4×5 matrix.

Note that matrices in Mathematica are not restricted to contain numbers; they can contain symbolic entries such as formulas:
In[130]:=

Out[131]//MatrixForm=

This is a 4×3 matrix with entries x_i + x_j, where i and j start at 1.

When you create a matrix and save it with an assignment, take care not to combine this with formatting using MatrixForm. Use parentheses:
In[132]:=
Out[132]//MatrixForm=

You can use the matrix in further calculations:

In[133]:=

Out[133]//MatrixForm=


Suppose you do not use parentheses:


In[134]:=
Out[134]//MatrixForm=

Then the result will print like a matrix but will not work in calculations like a matrix. For example, the following does not carry out matrix multiplication:
In[135]:= Out[135]=

You can check the saved value by using FullForm:

In[11]:=
Out[11]//FullForm=

This shows that the saved expression also includes the formatting wrapper MatrixForm, which stops it from working as a matrix.

There are functions to create a variety of special types of matrices. This creates a 4×5 matrix of random real values:
In[136]:=
Out[136]//MatrixForm=

This creates a matrix that only has nonzero entries on the diagonal:
In[137]:=
Out[137]//MatrixForm=

This creates a matrix whose entries are all the same:


In[138]:=
Out[138]//MatrixForm=


This creates a 4×4 Hilbert matrix; each entry is of the form 1/(i + j − 1):


In[139]:=
Out[139]//MatrixForm=
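The Hilbert matrix is easy to build in any language. A sketch in NumPy, using the standard definition (entry (i, j) is 1/(i + j − 1) with 1-based indices):

```python
import numpy as np

def hilbert(n):
    """n-by-n Hilbert matrix: entry (i, j) is 1/(i + j - 1), 1-based."""
    i, j = np.indices((n, n)) + 1   # 1-based row and column index grids
    return 1.0 / (i + j - 1)

h = hilbert(4)
```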

Many linear algebra and other functions return matrices. Here, the QR decomposition of a random 3×3 matrix is calculated:
In[140]:=

This prints the Q matrix:


In[142]:=
Out[142]//MatrixForm=

When Mathematica functions return matrices they often use an optimized storage format called packed arrays. You can apply many common operations in Mathematica to a list, and get back another list with the function mapped onto each element. This also works for matrices, which are lists of lists. Here is a 2×2 matrix of squares:
In[143]:=

Out[144]//MatrixForm=

This applies Sqrt to each element of the matrix:


In[145]:=
Out[145]//MatrixForm=

This behavior of Sqrt is called listability, and it makes very readable and efficient code.
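Listability has a direct NumPy analogue in universal functions. A minimal illustration (values arbitrary):

```python
import numpy as np

m = np.array([[1, 4],
              [9, 16]])

# NumPy ufuncs behave like listable Mathematica functions:
# sqrt maps over every element of the array.
roots = np.sqrt(m)
```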

Vectors and Matrices


Vectors and matrices in Mathematica are simply represented by lists and by lists of lists, respectively.
{a,b,c}: a vector
{{a,b},{c,d}}: a matrix

The representation of vectors and matrices by lists.


Because of the way Mathematica uses lists to represent vectors and matrices, you never have to distinguish between "row" and "column" vectors.

Table[f,{i,n}]: build a length-n vector by evaluating f with i ranging from 1 to n
Array[a,n]: build a length-n vector of the form {a[1],a[2],...}
Range[n]: create the list {1,2,3,...,n}
Range[n1,n2]: create the list {n1,n1+1,...,n2}
Range[n1,n2,dn]: create the list {n1,n1+dn,...,n2}
list[[i]] or Part[list,i]: give the ith element in the vector list
Length[list]: give the number of elements in list
c v: multiply a vector by a scalar
a.b: dot product of two vectors
Cross[a,b]: cross product of two vectors
Norm[v]: Euclidean norm of a vector

Functions for vectors.

Table[f,{i,m},{j,n}]: build an m×n matrix by evaluating f with i ranging from 1 to m and j ranging from 1 to n
Array[a,{m,n}]: build an m×n matrix with (i,j)th element a[i,j]
IdentityMatrix[n]: generate an n×n identity matrix
DiagonalMatrix[list]: generate a square matrix with the elements in list on the main diagonal
list[[i]] or Part[list,i]: give the ith row in the matrix list
list[[All,j]] or Part[list,All,j]: give the jth column in the matrix list
list[[i,j]] or Part[list,i,j]: give the (i,j)th element in the matrix list
Dimensions[list]: give the dimensions of a matrix represented by list

Functions for matrices.

Column[list]: display the elements of list in a column
MatrixForm[list]: display list in matrix form

Formatting constructs for vectors and matrices.

This builds a 3×3 matrix.

In[12]:=
Out[12]=

This displays it in standard two-dimensional matrix format.

In[13]:=
Out[13]//MatrixForm=

This gives a vector with symbolic elements. You can use this in deriving general formulas that are valid with any choice of vector components.

In[14]:=


Out[14]=

This gives a 3×2 matrix with symbolic elements. "Building Lists from Functions" discusses how you can produce other kinds of elements with Array.

In[15]:=
Out[15]=

Here are the dimensions of the matrix on the previous line.

In[16]:=
Out[16]=

This generates a 3×3 diagonal matrix.

In[17]:=
Out[17]=

c m: multiply a matrix by a scalar
a.b: dot product of two matrices
Inverse[m]: matrix inverse
MatrixPower[m,n]: nth power of a matrix
Det[m]: determinant
Tr[m]: trace
Transpose[m]: transpose
Eigenvalues[m]: eigenvalues
Eigenvectors[m]: eigenvectors

Some mathematical operations on matrices.

Here is the 2×2 matrix of symbolic variables that was defined.

In[18]:=
Out[18]=

This gives its determinant.

In[19]:=
Out[19]=

Here is the transpose.

In[20]:=
Out[20]=

This gives the inverse in symbolic form.

In[21]:=
Out[21]=

Here is a 3×3 rational matrix.

In[22]:=
Out[22]=

This gives its inverse.


In[23]:=
Out[23]=

Taking the dot product of the inverse with the original matrix gives the identity matrix.

In[24]:=
Out[24]=

Here is a 3×3 matrix.

In[25]:=
Out[25]=

Eigenvalues gives the eigenvalues of the matrix.

In[26]:=
Out[26]=

This gives a numerical approximation to the matrix.

In[27]:=
Out[27]=

Here are numerical approximations to the eigenvalues.

In[28]:=
Out[28]=
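The operations in the table above map one-to-one onto NumPy's linear algebra routines. A sketch with an arbitrary 2×2 matrix:

```python
import numpy as np

m = np.array([[2.0, 1.0],
              [1.0, 3.0]])

det = np.linalg.det(m)        # determinant, like Det[m]
inv = np.linalg.inv(m)        # inverse, like Inverse[m]
tr = np.trace(m)              # trace, like Tr[m]
tm = m.T                      # transpose, like Transpose[m]
vals = np.linalg.eigvals(m)   # eigenvalues, like Eigenvalues[m]

# the inverse times the original matrix gives the identity matrix
check = inv @ m
```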

ConjugateTranspose
The conjugate transpose of an m×n matrix A is the n×m matrix A^H defined by

A^H = conj(A)^T,   (1)

where A^T denotes the transpose of the matrix A and conj(A) denotes the entrywise complex conjugate. In all common spaces (i.e., separable Hilbert spaces), the conjugate and transpose operations commute, so

A^H = conj(A)^T = conj(A^T).   (2)

The symbol A^H (where the "H" stands for "Hermitian") gives official recognition to the fact that for complex matrices, it is almost always the combined operation of taking the transpose and complex conjugate that arises in physical or computational contexts, and virtually never the transpose in isolation (Strang 1988, pp. 220-221). The conjugate transpose of a matrix A is implemented in Mathematica as ConjugateTranspose[A].

The conjugate transpose is also known as the adjoint matrix, adjugate matrix, Hermitian adjoint, or Hermitian transpose (Strang 1988, p. 221). Unfortunately, several different notations are in use, as summarized in the following table. While the notation A^† is universally used in quantum field theory, A^H is commonly used in linear algebra. Note that because A^* is sometimes used to denote the complex conjugate, special care must be taken not to confuse notations from different sources.


notation: references
A^H: this work; Golub and van Loan (1996, p. 14); Strang (1988, p. 220)
A^*: Courant and Hilbert (1989, p. 9); Lancaster and Tismenetsky (1984); Meyer (2000)
A^†: Arfken (1985, p. 210); Weinberg (1995, p. xxv)

If a matrix is equal to its own conjugate transpose, it is said to be self-adjoint and is called Hermitian.
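Both the definition and the Hermitian property are easy to check numerically. A NumPy sketch with an arbitrary complex matrix (NumPy has no single ConjugateTranspose call, so conjugation and transposition are composed):

```python
import numpy as np

a = np.array([[1 + 1j, 2],
              [3j, 4 - 2j]])

# Conjugate transpose (Hermitian transpose): conjugate entrywise, then transpose.
ah = a.conj().T

# A matrix equal to its own conjugate transpose is Hermitian;
# products of the form A A^H are always Hermitian.
h = a @ ah
is_hermitian = np.allclose(h, h.conj().T)
```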

Conjugate transpose of a complex-valued matrix:


In[1]:= In[2]:=
Out[2]//MatrixForm=

In[3]:=
Out[3]//MatrixForm=

Enter the conjugate transpose using Esc ct Esc:

In[1]:=
Out[1]=

Conjugate transpose a sparse array:

In[1]:=
Out[1]=

In[2]:=
Out[2]//MatrixForm=

The conjugate transpose is sparse:

In[3]:=
Out[3]=

In[4]:=


Out[4]//MatrixForm=


ConjugateTranspose works for symbolic matrices:


In[1]:= Out[1]=

ComplexExpand assumes all variables are real:


In[2]:= Out[2]=

Generalizations & Extensions

In[1]:=
Out[1]=

ConjugateTranspose works similarly to Transpose for tensors:

Conjugate and transpose the first two dimensions:

In[2]:=
Out[2]=

Conjugate and transpose the first and third dimensions:

In[3]:=
Out[3]=

Applications

This is a random complex matrix:

In[1]:=

Find the QRDecomposition of the matrix:

In[2]:=

The q matrix is unitary, so its inverse is its conjugate transpose:

In[3]:=
Out[3]=

Reconstruct the original matrix from the decomposition:

In[4]:=
Out[4]=

Properties & Relations

ConjugateTranspose[m] is equivalent to Conjugate[Transpose[m]]:

In[1]:=
In[2]:=
Out[2]=

The product of a matrix and its conjugate transpose is Hermitian:

In[1]:=

This forms the matrix product of the matrix and its conjugate transpose:

In[2]:=

so the product is Hermitian:

In[3]:=
Out[3]=

Entering Tables and Matrices

The Mathematica front end provides an Insert Table/Matrix submenu for creating and editing arrays with any specified number of rows and columns. Once you have such an array, you can edit it to fill in whatever elements you want.

Mathematica treats an array like this as a matrix represented by a list of lists.

In[1]:=
Out[1]=

Putting parentheses around the array makes it look more like a matrix, but does not affect its interpretation.

In[2]:=
Out[2]=

Using MatrixForm tells Mathematica to display the result of the Transpose as a matrix.

In[3]:=
Out[3]//MatrixForm=


Complex Matrix

A matrix whose elements may contain complex numbers. The matrix product of two complex matrices can be carried out in real arithmetic by splitting each factor into real and imaginary parts: writing A = A_R + i A_I and B = B_R + i B_I, the product is

AB = (A_R B_R − A_I B_I) + i (A_R B_I + A_I B_R).

Hadamard (1893) proved that the determinant of any complex n×n matrix A with entries in the closed unit disk |a_ij| ≤ 1 satisfies

|det A| ≤ n^(n/2)

(Hadamard's maximum determinant problem), with equality attained by the Vandermonde matrix of the nth roots of unity (Faddeev and Sominskii 1965, p. 331; Brenner 1972). The first few values of n^(n/2) for n = 1, 2, ... are 1, 2, 3√3, 16, 25√5, 216, ....
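Hadamard's bound can be spot-checked numerically. A sketch that draws entries uniformly from the unit disk (the seed and size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# random complex entries drawn uniformly from the closed unit disk |z| <= 1
radius = np.sqrt(rng.uniform(0.0, 1.0, (n, n)))
angle = rng.uniform(0.0, 2.0 * np.pi, (n, n))
a = radius * np.exp(1j * angle)

# Hadamard's inequality: |det A| <= n^(n/2)
bound_holds = abs(np.linalg.det(a)) <= n ** (n / 2)
```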

Determining the Eigenvalues of a Matrix


Studying the maximum possible eigenvalue norms for random complex matrices is computationally intractable. Although average properties of the distribution can be determined, finding the maximum value corresponds to determining if the set of matrices contains a singular matrix, which has been proven to be an NP-complete problem (Poljak and Rohn 1993, Kaltofen 2000). Plots (omitted here) show the distributions of eigenvalue norms for small random matrices with elements uniformly distributed inside the unit disk; similar plots are obtained for elements uniformly distributed on a real interval. The exact distribution of eigenvalues for complex matrices with both real and imaginary parts distributed as independent standard normal variates is given by Ginibre (1965), Hwang (1986), and Mehta (1991).

Given a square matrix A, the condition that characterizes an eigenvalue, λ, is the existence of a nonzero vector x such that Ax = λx; this equation can be rewritten as follows:

(A − λI)x = 0


This final form of the equation makes it clear that x is the solution of a square, homogeneous system. If nonzero solutions are desired, then the determinant of the coefficient matrix, which in this case is A − λI, must be zero; if not, then the system possesses only the trivial solution x = 0. Since eigenvectors are, by definition, nonzero, in order for x to be an eigenvector of a matrix A, λ must be chosen so that

det(A − λI) = 0

When the determinant of A − λI is written out, the resulting expression is a monic polynomial in λ. [A monic polynomial is one in which the coefficient of the leading (the highest-degree) term is 1.] It is called the characteristic polynomial of A and will be of degree n if A is n×n. The zeros of the characteristic polynomial of A, that is, the solutions of the characteristic equation det(A − λI) = 0, are the eigenvalues of A.
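The relationship between the characteristic polynomial and the eigenvalues can be illustrated numerically. A NumPy sketch with an arbitrary 2×2 matrix (np.poly returns the coefficients of the monic characteristic polynomial):

```python
import numpy as np

a = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# coefficients of the monic characteristic polynomial det(A - lambda*I):
# for this matrix, lambda^2 - 4*lambda + 3
coeffs = np.poly(a)

# the roots of the characteristic polynomial are exactly the eigenvalues of A
roots = np.sort(np.roots(coeffs))
eigs = np.sort(np.linalg.eigvals(a))
```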

Gaussian elimination is usually carried out using matrices. This method reduces the effort in finding the solutions by eliminating the need to explicitly write the variables at each step. The previous example will be redone using matrices.
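Hand elimination can always be cross-checked numerically. A minimal sketch in Python with NumPy, using a small hypothetical system rather than any of the worked examples:

```python
import numpy as np

# a small hypothetical system, chosen only for illustration:
#    x +  y = 3
#   3x - 2y = 4
a = np.array([[1.0, 1.0],
              [3.0, -2.0]])
b = np.array([3.0, 4.0])

# solve performs an LU-factorization-based elimination internally
x = np.linalg.solve(a, b)
```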
Example 2: Solve this system:

The first step is to write the coefficients of the unknowns in a matrix:

This is called the coefficient matrix of the system. Next, the coefficient matrix is augmented by writing the constants that appear on the right-hand sides of the equations as an additional column:

This is called the augmented matrix, and each row corresponds to an equation in the given system. The first row, r1 = (1, 1, 3), corresponds to the first equation, 1x + 1y = 3, and the second row, r2 = (3, −2, 4), corresponds to the second equation, 3x − 2y = 4. You may choose to include a vertical line, as shown above, to separate the coefficients of the unknowns from the extra column representing the constants. Now, the counterpart of eliminating a variable from an equation in the system is changing one of the entries in the coefficient matrix to zero. Likewise, the counterpart of adding a multiple of one equation to another is adding a multiple of one row to another row. Adding −3 times the first row of the augmented matrix to the second row yields


The new second row translates into −5y = −5, which means y = 1. Back-substitution into the first row (that is, into the equation that represents the first row) yields x = 2 and, therefore, the solution to the system: (x, y) = (2, 1). Gaussian elimination can be summarized as follows. Given a linear system expressed in matrix form, Ax = b, first write down the corresponding augmented matrix:

Then, perform a sequence of elementary row operations, which are any of the following:

Type 1. Interchange any two rows.
Type 2. Multiply a row by a nonzero constant.
Type 3. Add a multiple of one row to another row.

The goal of these operations is to transform, or reduce, the original augmented matrix into one of the form [A′ | b′], where A′ is upper triangular (a′ij = 0 for i > j), any zero rows appear at the bottom of the matrix, and the first nonzero entry in any row is to the right of the first nonzero entry in any higher row; such a matrix is said to be in echelon form. The solutions of the system represented by the simpler augmented matrix [A′ | b′] can be found by inspection of the bottom rows and back-substitution into the higher rows. Since elementary row operations do not change the solutions of the system, the vectors x which satisfy the simpler system A′x = b′ are precisely those that satisfy the original system, Ax = b. Example 3: Solve the following system using Gaussian elimination:

The augmented matrix which represents this system is

The first goal is to produce zeros below the first entry in the first column, which translates into eliminating the first variable, x, from the second and third equations. The row operations which accomplish this are as follows:


The second goal is to produce a zero below the second entry in the second column, which translates into eliminating the second variable, y, from the third equation. One way to accomplish this would be to add −1/5 times the second row to the third row. However, to avoid fractions, there is another option: first interchange rows two and three. Interchanging two rows merely interchanges the equations, which clearly will not alter the solution of the system:

Now, add −5 times the second row to the third row:

Since the coefficient matrix has been transformed into echelon form, the forward part of Gaussian elimination is complete. What remains now is to use the third row to evaluate the third unknown, then to back-substitute into the second row to evaluate the second unknown, and, finally, to back-substitute into the first row to evaluate the first unknown. The third row of the final matrix translates into 10z = 10, which gives z = 1. Back-substitution of this value into the second row, which represents the equation y − 3z = −1, yields y = 2. Back-substitution of both these values into the first row, which represents the equation x − 2y + z = 0, gives x = 3. The solution of this system is therefore (x, y, z) = (3, 2, 1). Example 4: Solve the following system using Gaussian elimination:

For this system, the augmented matrix (vertical line omitted) is


First, multiply row 1 by 1/2:

Now, adding −1 times the first row to the second row yields zeros below the first entry in the first column:

Interchanging the second and third rows then gives the desired uppertriangular coefficient matrix:

The third row now says z = 4. Back-substituting this value into the second row gives y = 1, and back-substitution of both these values into the first row yields x = 2. The solution of this system is therefore (x, y, z) = (2, 1, 4). Gauss-Jordan elimination. Gaussian elimination proceeds by performing elementary row operations to produce zeros below the diagonal of the coefficient matrix to reduce it to echelon form. (Recall that a matrix A = [aij] is in echelon form when aij = 0 for i > j, any zero rows appear at the bottom of the matrix, and the first nonzero entry in any row is to the right of the first nonzero entry in any higher row.) Once this is done, inspection of the bottom row(s) and back-substitution into the upper rows determine the values of the unknowns. However, it is possible to reduce (or eliminate entirely) the computations involved in back-substitution by performing additional row operations to transform the matrix from echelon form to reduced echelon form. A matrix is in reduced echelon form when, in addition to being in echelon form, each column that contains a nonzero entry (usually made to be 1) has zeros not just below that entry but also above that entry. Loosely speaking, Gaussian elimination works from the top down, to produce a matrix in echelon form, whereas Gauss-Jordan elimination continues where Gaussian left off by then working from the bottom up to produce a matrix in reduced echelon form. The technique will be illustrated in the following example. Example 5: The height, y, of an object thrown into the air is known to be given by a quadratic function of t (time) of the form y = at^2 + bt + c. If the object is at height y = 23/4 at time t = 1/2, at y = 7 at time t = 1, and at y = 2 at t = 2, determine the coefficients a, b, and c. Since t = 1/2 gives y = 23/4,


while the other two conditions, y(t = 1) = 7 and y(t = 2) = 2, give the following equations for a, b, and c:

Therefore, the goal is to solve the system

The augmented matrix for this system is reduced as follows:

At this point, the forward part of Gaussian elimination is finished, since the coefficient matrix has been reduced to echelon form. However, to illustrate Gauss-Jordan elimination, the following additional elementary row operations are performed:

This final matrix immediately gives the solution: a = −5, b = 10, and c = 2. Example 6: Solve the following system using Gaussian elimination:


The augmented matrix for this system is

Multiples of the first row are added to the other rows to produce zeros below the first entry in the first column:

Next, −1 times the second row is added to the third row:

The third row now says 0 x + 0 y + 0 z = 1, an equation that cannot be satisfied by any values of x, y, and z. The process stops: this system has no solutions. The previous example shows how Gaussian elimination reveals an inconsistent system. A slight alteration of that system (for example, changing the constant term 7 in the third equation to a 6) will illustrate a system with infinitely many solutions. Example 7: Solve the following system using Gaussian elimination:

The same operations applied to the augmented matrix of the system in Example 6 are applied to the augmented matrix for the present system:

Here, the third row translates into 0x + 0y + 0z = 0, an equation which is satisfied by any x, y, and z. Since this offers no constraint on the unknowns, there are not three conditions on the unknowns, only two (represented by the two nonzero rows in the final augmented matrix). Since there are 3 unknowns but only 2 constraints, 3 − 2 = 1 of the unknowns, z say, is arbitrary; this is called a free variable. Let z = t, where t is any real number. Back-substitution of z = t into the second row (−y + 5z = −6) gives

Back-substituting z = t and y = 6 + 5t into the first row (x + y − 3z = 4) determines x:

Therefore, every solution of the system has the form

where t is any real number. There are infinitely many solutions, since every real value of t gives a different particular solution. For example, choosing t = 1 gives (x, y, z) = (−4, 11, 1), while t = −3 gives (x, y, z) = (4, −9, −3), and so on. Geometrically, this system represents three planes in R^3 that intersect in a line, and (*) is a parametric equation for this line. Example 7 provided an illustration of a system with infinitely many solutions, how this case arises, and how the solution is written. Every linear system that possesses infinitely many solutions must contain at least one arbitrary parameter (free variable). Once the augmented matrix has been reduced to echelon form, the number of free variables is equal to the total number of unknowns minus the number of nonzero rows:

This agrees with Theorem B above, which states that a linear system with fewer equations than unknowns, if consistent, has infinitely many solutions. The condition "fewer equations than unknowns" means that the number of rows in the coefficient matrix is less than the number of unknowns. Therefore, the boxed equation above implies that there must be at least one free variable. Since such a variable can, by definition, take on infinitely many values, the system will have infinitely many solutions.
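The free-variable count can also be phrased in terms of matrix rank. A NumPy sketch with a hypothetical echelon matrix:

```python
import numpy as np

# echelon coefficient matrix of a hypothetical system:
# 2 nonzero rows, 3 unknowns
a = np.array([[1.0, 1.0, -3.0],
              [0.0, 1.0, 5.0]])

n_unknowns = a.shape[1]
n_nonzero_rows = np.linalg.matrix_rank(a)   # rank equals the nonzero-row count here

# free variables = total unknowns - nonzero rows
n_free = n_unknowns - n_nonzero_rows
```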

Example 8: Find all solutions to the system

First, note that there are four unknowns, but only three equations. Therefore, if the system is consistent, it is guaranteed to have infinitely many solutions, a condition characterized by at least one parameter in the general solution. After the corresponding augmented matrix is constructed, Gaussian elimination yields

The fact that only two nonzero rows remain in the echelon form of the augmented matrix means that 4 − 2 = 2 of the variables are free:

Therefore, selecting y and z as the free variables, let y = t1 and z = t2. The second row of the reduced augmented matrix implies

and the first row then gives

Thus, the solutions of the system have the form


where t1 and t2 are allowed to take on any real values. Example 9: Let b = (b1, b2, b3)^T and let A be the matrix

For what values of b 1, b 2, and b 3 will the system A x = b be consistent? The augmented matrix for the system A x = b reads

which Gaussian elimination reduces as follows:

The bottom row now implies that b1 + 3b2 + b3 must be zero if this system is to be consistent. Therefore, the given system has solutions (infinitely many, in fact) only for those column vectors b = (b1, b2, b3)^T for which b1 + 3b2 + b3 = 0. Example 10: Solve the following system (compare to Example 7):

A system such as this one, where the constant term on the right-hand side of every equation is 0, is called a homogeneous system. In matrix form it reads Ax = 0. Every homogeneous system is consistent, because x = 0 is always a solution; a homogeneous system therefore has either exactly one solution (the trivial solution, x = 0) or infinitely many. The row reduction of the coefficient matrix for this system has already been performed in Example 7. It is not necessary to explicitly augment the coefficient matrix with the column b = 0, since no elementary row operation can affect these zeros. That is, if A′ is an echelon form of A, then elementary row operations will transform [A | 0] into [A′ | 0]. From the results of Example 7,

Since the last row again implies that z can be taken as a free variable, let z = t, where t is any real number. Back-substitution of z = t into the second row (−y + 5z = 0) gives

and back-substitution of z = t and y = 5t into the first row (x + y − 3z = 0) determines x:

Therefore, every solution of this system has the form (x, y, z) = (−2t, 5t, t), where t is any real number. There are infinitely many solutions, since every real value of t gives a unique particular solution. Note carefully the difference between the set of solutions to the system in Example 7 and the one here. Although both have the same coefficient matrix A, the system in Example 7 was nonhomogeneous (Ax = b, where b ≠ 0), while the one here is the corresponding homogeneous system, Ax = 0. Placing their solutions side by side,

general solution to Ax = 0: (x, y, z) = (−2t, 5t, t)
general solution to Ax = b: (x, y, z) = (−2t, 5t, t) + (−2, 6, 0)

illustrates an important fact: Theorem C. The general solution to a consistent nonhomogeneous linear system, Ax = b, is equal to the general solution of the corresponding homogeneous system, Ax = 0, plus a particular solution of the nonhomogeneous system. That is, if x = x_h represents the general solution of Ax = 0, then x = x_h + x_p represents the general solution of Ax = b, where x_p is any particular solution of the (consistent) nonhomogeneous system Ax = b. [Technical note: Theorem C, which concerns a linear system, has a counterpart in the theory of linear differential equations. Let L be a linear differential operator; then the general solution of a solvable nonhomogeneous linear differential equation, L(y) = d (where d ≠ 0), is equal to the general solution of the corresponding homogeneous equation, L(y) = 0, plus a particular solution of the nonhomogeneous equation. That is, if y = y_h represents the general solution of L(y) = 0, then y = y_h + y_p represents the general solution of L(y) = d, where y_p is any particular solution of the (solvable) nonhomogeneous linear equation L(y) = d.] Example 11: Determine all solutions of the system

Write down the augmented matrix and perform the following sequence of operations:


Since only 2 nonzero rows remain in this final (echelon) matrix, there are only 2 constraints, and, consequently, 4 − 2 = 2 of the unknowns, y and z say, are free variables. Let y = t1 and z = t2. Back-substitution of y = t1 and z = t2 into the second row (x − 3y + 4z = 1) gives

Finally, back-substituting x = 1 + 3t1 − 4t2, y = t1, and z = t2 into the first row (2w − 2x + y = −1) determines w:

Therefore, every solution of this system has the form

where t 1 and t 2 are any real numbers. Another way to write the solution is as follows:

where t1, t2 ∈ R. Example 12: Determine the general solution of

which is the homogeneous system corresponding to the nonhomogeneous one in Example 11 above.

Since the solution to the nonhomogeneous system in Example 11 is

Theorem C implies that the solution of the corresponding homogeneous system is (where t1, t2 R), which is obtained from (*) by simply discarding the particular soluttion, x = (1/2,1,0,0), of the nonhomogeneous system. Example 13: Prove Theorem A: Regardless of its size or the number of unknowns its equations contain, a linear system will have either no solutions, exactly one solution, or infinitely many solutions. Proof. Let the given linear system be written in matrix form A x = b. The theorem really comes down to tthis: if A x = b has more than one solution, then it actually has infinitely many. To establish this, let x1 and x2 be two distinct solutions of A x= b. It will now be shown that for any real value of t, the vector x1 + t(x1 x 2) is also a solution of A x = b; because t can take on infinitely many different values, the desired conclusion will follow. Since A x1 = b and A x2,

Therefore, x1 + t(x1 − x2) is indeed a solution of Ax = b, and the theorem is proved.
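The construction in this proof is easy to verify numerically. A NumPy sketch with a hypothetical singular system and two hand-picked solutions:

```python
import numpy as np

# a consistent system with more than one solution: the third row is the
# sum of the first two, so A is singular (all values hypothetical)
a = np.array([[1.0, 1.0, -3.0],
              [0.0, 1.0, 5.0],
              [1.0, 2.0, 2.0]])
b = np.array([4.0, 6.0, 10.0])

# two distinct solutions, found by fixing z = 0 and z = 1
x1 = np.array([-2.0, 6.0, 0.0])
x2 = np.array([6.0, 1.0, 1.0])

# the construction from the proof: x1 + t*(x1 - x2) solves A x = b for every t
ok = all(np.allclose(a @ (x1 + t * (x1 - x2)), b) for t in (-7.0, 0.5, 2.0))
```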

Vectors and matrices in MATLAB


The purpose of this section is to demonstrate how to create and transform vectors and matrices in MATLAB. This command creates a row vector:

a = [1 2 3]
a =
     1     2     3

Column vectors are entered in a similar way; however, semicolons must separate the components of the vector:

b = [1;2;3]
b =
     1
     2
     3

The quote operator ' is used to create the conjugate transpose of a vector (matrix), while the dot-quote operator .' creates the transpose of a vector (matrix). To illustrate this, let us form a complex vector a + i*b' and then apply these operations to the resulting vector:

(a+i*b')'
ans =
   1.0000 - 1.0000i
   2.0000 - 2.0000i


   3.0000 - 3.0000i

while

(a+i*b').'
ans =
   1.0000 + 1.0000i
   2.0000 + 2.0000i
   3.0000 + 3.0000i

Command length returns the number of components of a vector:

length(a)
ans =
     3

The dot operator . plays a specific role in MATLAB. It is used for the componentwise application of the operator that follows it:

a.*a
ans =
     1     4     9

The same result is obtained by applying the power operator .^ to the vector a:

a.^2
ans =
     1     4     9

Componentwise division of vectors a and b can be accomplished by using the backslash operator \ together with the dot operator:

a.\b'
ans =
     1     1     1

For the purpose of the next example let us change vector a to a column vector:

a = a'
a =
     1
     2
     3

The dot product and the outer product of vectors a and b are calculated as follows:

dotprod = a'*b
dotprod =
    14

outprod = a*b'
outprod =
     1     2     3
     2     4     6
     3     6     9

The cross product of two three-dimensional vectors is calculated using command cross. Let the vector a be the same as above and let

b = [-2 1 2];

Note that the semicolon after a command suppresses display of the result. The cross product of a and b is

cp = cross(a,b)
cp =
     1    -8     5

The cross product vector cp is perpendicular to both a and b:

[cp*a cp*b']


ans =
     0     0

We will now deal with operations on matrices. Addition, subtraction, and scalar multiplication are defined in the same way as for vectors. This creates a 3-by-3 matrix:

A = [1 2 3;4 5 6;7 8 10]
A =
     1     2     3
     4     5     6
     7     8    10

Note that the semicolon operator ; separates the rows. To extract a submatrix B consisting of rows 1 and 3 and columns 1 and 2 of the matrix A, do the following:

B = A([1 3], [1 2])
B =
     1     2
     7     8

To interchange rows 1 and 3 of A, use the vector of row indices together with the colon operator:

C = A([3 2 1],:)
C =
     7     8    10
     4     5     6
     1     2     3

The colon operator : stands for all columns or all rows. For the matrix A from the last example, the following command

A(:)
ans =
     1
     4
     7
     2
     5
     8
     3
     6
    10

creates a vector version of the matrix A. We will use this operator on several occasions. To delete a row (column), use the empty vector operator [ ]:

A(:, 2) = []
A =
     1     3
     4     6
     7    10

The second column of the matrix A is now deleted. To insert a row (column), we use the technique for creating matrices and vectors:

A = [A(:,1) [2 5 8]' A(:,2)]
A =
     1     2     3
     4     5     6
     7     8    10

Matrix A is now restored to its original form. Using MATLAB commands one can easily extract those entries of a matrix that satisfy an imposed condition. Suppose that one wants to extract all entries of A that are greater than one. First, we


define a new matrix A:

A = [-1 2 3;0 5 1]
A =
    -1     2     3
     0     5     1

Command A > 1 creates a matrix of zeros and ones:

A > 1
ans =
     0     1     1
     0     1     0

with ones in the positions where the entries of A satisfy the imposed condition and zeros everywhere else. This illustrates logical addressing in MATLAB. To extract those entries of the matrix A that are greater than one, we execute the following command:

A(A > 1)
ans =
     2
     5
     3

The dot operator . works for matrices too. Let now

A = [1 2 3; 3 2 1];

The following command

A.*A
ans =
     1     4     9
     9     4     1

computes the entry-by-entry product of A with A. However, the following command

A*A
??? Error using ==> *
Inner matrix dimensions must agree.

generates an error message. Function diag will be used on several occasions. This creates a diagonal matrix with the diagonal entries stored in the vector d:

d = [1 2 3];
D = diag(d)
D =
     1     0     0
     0     2     0
     0     0     3

To extract the main diagonal of the matrix D, we use function diag again:

d = diag(D)
d =
     1
     2
     3

What is the result of executing the following command?

diag(diag(d));

In some problems that arise in linear algebra one needs to calculate a linear combination of several matrices of the same dimension. In order to obtain the desired combination, both the coefficients and the matrices must be stored in cells. In MATLAB a cell is entered using curly braces { }. This

c = {1,-2,3}


c =
    [1]    [-2]    [3]

is an example of a cell. The function lincomb will be used later on in this tutorial.

function M = lincomb(v,A)
% Linear combination M of several matrices of the same size.
% Coefficients v = {v1,v2,...,vm} of the linear combination and the
% matrices A = {A1,A2,...,Am} must be inputted as cells.
m = length(v);
[k, l] = size(A{1});
M = zeros(k, l);
for i = 1:m
    M = M + v{i}*A{i};
end
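An equivalent of lincomb can be sketched in Python, with lists playing the role of MATLAB cells (the sample coefficients and matrices below are arbitrary):

```python
import numpy as np

def lincomb(v, mats):
    """Linear combination of equally sized matrices.

    v    -- list of scalar coefficients [v1, v2, ..., vm]
    mats -- list of same-size matrices  [A1, A2, ..., Am]
    """
    total = np.zeros_like(mats[0], dtype=float)
    for coeff, a in zip(v, mats):
        total = total + coeff * a
    return total

c = [1, -2, 3]
mats = [np.eye(2), np.ones((2, 2)), np.diag([1.0, 2.0])]
result = lincomb(c, mats)
```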

