Krishna Kaipa
January 2, 2012
Course Plan
Week 1: Matrix mechanics; solving linear equations.
Weeks 2, 3, 4: Linear algebra: what is the physics/geometry behind all those matrix manipulations? Vector spaces and linear transformations.
Week 5: Determinants: what is the geometry behind that algebraic gadget known as the determinant?
Week 6: The dot product generalized; orthogonality.
Week 7: Eigenvalues, eigenvectors, and diagonalization of symmetric matrices.
Applications will be interspersed.
As per the Senate rule, attendance in the first week is compulsory; failing which you will be dis-enrolled from the course. Starting from the second week, there will be a weekly 10-minute quiz of 10 marks at the beginning of each tutorial. It will have 2 questions: one from the tutorial sheet and one from the material covered in class during the previous week. The mid-semester exam will be of 50 marks. The textbook Introduction to Linear Algebra (2nd edition) by Serge Lang will be followed for the course. An Indian edition of this book is available.
Matrix Terminology
For integers $m, n > 0$, an $m \times n$ matrix $A$ is an array of numbers (mostly real, sometimes complex)
$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix}$$
We write $A = (a_{ij})$, where the row index $i$ runs from 1 to $m$ and the column index $j$ runs from 1 to $n$; $a_{ij}$ itself is called the $ij$-th entry of $A$. Sometimes we will use the notation $A_{:,j}$ for the $j$-th column of $A$, and $A_{i,:}$ for the $i$-th row.
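The indexing conventions above translate directly into NumPy slicing; a minimal sketch (the matrix entries here are illustrative, and note that NumPy indices start at 0, so the slide's $a_{ij}$ is `A[i-1, j-1]`):

```python
import numpy as np

# A 2x3 matrix A = (a_ij), indexed from 0 in NumPy.
A = np.array([[2, 1, -1],
              [0, 3, 1]])

print(A.shape)   # the pair (m, n) = (2, 3)
print(A[0, 2])   # the entry a_13, i.e. -1
print(A[:, 1])   # the column A_{:,2}
print(A[0, :])   # the row A_{1,:}
```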
Clearly the rows/columns of $A$ become the columns/rows of $A^t$, and taking the transpose twice does nothing to a matrix: $(A^t)^t = A$. A symmetric matrix is a matrix which equals its transpose (it has to be a square matrix). Using the transpose notation, a column vector of size $n \times 1$ can be written as $(x_1, \dots, x_n)^t$.
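These transpose facts are easy to check numerically; a small NumPy sketch (the particular matrices are illustrative):

```python
import numpy as np

A = np.array([[2, 1, -1],
              [0, 3, 1]])

# Rows of A become columns of A^t, and (A^t)^t = A.
assert np.array_equal(A.T[2, :], A[:, 2])
assert np.array_equal(A.T.T, A)

# A symmetric matrix equals its transpose (necessarily square).
S = np.array([[1, 2],
              [2, 5]])
assert np.array_equal(S, S.T)
```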
Krishna Kaipa MA 106 Linear Algebra: Lecture 1
Multiplication of Matrices
Recall that the dot product of two column vectors (of length $n$), $X = (x_1, \dots, x_n)^t$ and $Y = (y_1, \dots, y_n)^t$, is the scalar
$$X \cdot Y = \sum_{i=1}^{n} x_i y_i.$$
Given a matrix $A$ of size $m \times n$ and a matrix $B$ of size $n \times p$, we define the product $AB$ to be the $m \times p$ matrix whose $ij$-th entry is the dot product of the $i$-th column of $A^t$ with the $j$-th column of $B$. We see that the definition is not symmetric in $A$ and $B$, so $BA$ need not equal $AB$ even if they have the same size. Let us put down the formula for the $ij$-th entry of $AB$:
$$(AB)_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.$$
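The formula can be coded up verbatim and compared against NumPy's built-in product; a minimal sketch:

```python
import numpy as np

def matmul(A, B):
    """(AB)_ij = sum_k a_ik * b_kj, straight from the definition."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must agree"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.random.rand(2, 3)
B = np.random.rand(3, 4)
assert np.allclose(matmul(A, B), A @ B)  # agrees with NumPy's product
# Note BA is not even defined here: B is 3x4 but A has only 2 rows.
```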
Write $B = [B_1, B_2, \dots, B_p]$, where the $B_j$ are the columns of the $n \times p$ matrix $B$. We claim: $AB = [AB_1, AB_2, \dots, AB_p]$. To see this, observe that the $ij$-th entry of the l.h.s. is, by definition, the dot product of row $i$ of $A$ with $B_j$; but the same dot product is also the $i$-th entry of $AB_j$.
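The column-by-column claim is easy to verify numerically; a small sketch with random matrices:

```python
import numpy as np

A = np.random.rand(2, 3)
B = np.random.rand(3, 4)

AB = A @ B
# The j-th column of AB equals A times the j-th column of B.
for j in range(B.shape[1]):
    assert np.allclose(AB[:, j], A @ B[:, j])
```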
We claim that the distributive law holds for matrix products: $A(B + C) = AB + AC$. To prove this we just have to show that the $ij$-th entries of the l.h.s. and r.h.s. agree. Let $u$ be (the transpose of) the $i$-th row of $A$, and let $v, w$ be the $j$-th columns of $B, C$ respectively. Then we must show $u \cdot (v + w) = u \cdot v + u \cdot w$, and this we borrow from the distributive law for dot products.
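A quick numerical check of the distributive law (equality holds up to floating-point rounding, hence `allclose`):

```python
import numpy as np

A = np.random.rand(2, 3)
B = np.random.rand(3, 4)
C = np.random.rand(3, 4)

# Distributive law A(B + C) = AB + AC, checked entrywise.
assert np.allclose(A @ (B + C), A @ B + A @ C)
```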
Examples
Let
$$A = \begin{pmatrix} 2 & 1 & -1 \\ 0 & 3 & 1 \end{pmatrix}$$
and let $B$ be a $3 \times 4$ matrix. Then $AB$ is a $2 \times 4$ matrix, while $BA$ cannot be defined (the number of columns of $B$ does not match the number of rows of $A$).

Consider a system of two linear equations in three unknowns
$$\begin{aligned} 2x + y - z &= 1 \\ 3y + z &= 5 \end{aligned}$$
which we can cast as a matrix equation $A\mathbf{x} = b$, where the matrix $A$ is as in the example above, the vector $\mathbf{x} = (x, y, z)^t$ and $b = (1, 5)^t$. It is clear that $m$ linear equations in $n$ unknowns can be cast as $A\mathbf{x} = b$, where $A$ is $m \times n$ and $\mathbf{x}$ and $b$ are column vectors of length $n$ and $m$, respectively.
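The system above can be checked in matrix form; a minimal sketch (the particular solution, found by hand by setting $z = 1$, is an illustration, since the system is underdetermined):

```python
import numpy as np

# The system 2x + y - z = 1, 3y + z = 5 cast as A x = b.
A = np.array([[2, 1, -1],
              [0, 3, 1]], dtype=float)
b = np.array([1, 5], dtype=float)

# One particular solution: take z = 1, then y = 4/3 and x = 1/3.
x = np.array([1/3, 4/3, 1])
assert np.allclose(A @ x, b)
```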
Question: Take a vector $v = (x, y)^t \in \mathbb{R}^2$. How does $v \mapsto R(\theta)v$ transform $v$? [Also answer: what is the product $R(\theta_1)R(\theta_2)$?]

Answer: It rotates $v$ by $\theta$ radians anti-clockwise. One way to see this is to use complex notation. (If you don't want complex notation, simply use polar coordinates.) Set $z = x + iy$. Then
$$R(\theta)\begin{pmatrix} x \\ y \end{pmatrix} = x\begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} + y\begin{pmatrix} -\sin\theta \\ \cos\theta \end{pmatrix},$$
which in complex notation is $x e^{i\theta} + iy\, e^{i\theta} = z e^{i\theta}$.

Now, we'll do something weird. We will let the rotation angle become imaginary: set $\theta = i\tau$, where $i = \sqrt{-1}$ and $\tau$ is real. We make sense out of this as follows. Fact: the power series $e^z = 1 + z + z^2/2! + z^3/3! + \cdots$ converges for all complex numbers $z$. For a real number $\theta$, Euler's formula is:
$$e^{i\theta} = \cos\theta + i\sin\theta.$$
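Both descriptions of rotation (the matrix $R(\theta)$ acting on $(x,y)^t$, and multiplication of $z = x + iy$ by $e^{i\theta}$) can be compared numerically; the angle and vector below are arbitrary illustrations:

```python
import numpy as np

def rot(t):
    """The rotation matrix R(t) = [[cos t, -sin t], [sin t, cos t]]."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

theta = 0.7                      # an arbitrary angle, in radians
v = np.array([2.0, 1.0])         # v = (x, y)^t
z = complex(v[0], v[1])          # z = x + iy

w = rot(theta) @ v               # rotate with the matrix
zw = z * np.exp(1j * theta)      # rotate with complex multiplication
assert np.allclose(w, [zw.real, zw.imag])

# Composing rotations adds the angles: R(t1) R(t2) = R(t1 + t2).
assert np.allclose(rot(0.3) @ rot(0.4), rot(0.7))
```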
continued
Problem: Determine $g_{ij}$ and prove that $G$ is Markov (with all $g_{ij} > 0$).

Answer: If $\mathrm{Out}(j) = \emptyset$, then $g_{ij} = 1/n$. If $\mathrm{Out}(j) \neq \emptyset$ and $i \in \mathrm{Out}(j)$, then $g_{ij} = 0.85/\#\mathrm{Out}(j) + 0.15/n$. If $\mathrm{Out}(j) \neq \emptyset$ and $i \notin \mathrm{Out}(j)$, then $g_{ij} = 0.15/n$. Check that each column sum is 1.

Question: What is the probability that RS lands at page $i$ after 2 clicks from page $j$? Generalize for 2 replaced with $m$.

Answer: The $ij$-th entry of $G^2$, respectively $G^m$. To see this, we observe that the desired probability is the sum, over all pages $k$, of the probability of going from $j$ to $k$ times the probability of going from $k$ to $i$; in other words $\sum_{k=1}^{n} g_{ik} g_{kj}$, which is $(G^2)_{ij}$. The $j$-th column of $G^m$ is just $G^m e_j$. As $m \to \infty$, the fact on the previous frame implies that all columns of $G^m$ will converge to a fixed vector $v$. It follows that the probability of reaching page $i$ after $m$ clicks is virtually $(v)_i$ for all sufficiently large $m$ (independent of the starting page).
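The three cases defining $g_{ij}$ can be coded directly; a minimal sketch on a hypothetical 4-page web (the link structure `Out` is made up for illustration):

```python
import numpy as np

# A tiny web of n = 4 pages; Out[j] lists the pages that page j links to.
n = 4
Out = {0: [1, 2], 1: [2], 2: [0], 3: []}   # page 3 has no out-links

G = np.zeros((n, n))
for j in range(n):
    if not Out[j]:                 # Out(j) empty: jump uniformly at random
        G[:, j] = 1.0 / n
    else:
        G[:, j] = 0.15 / n         # default: i not in Out(j)
        for i in Out[j]:           # overwrite the linked pages
            G[i, j] = 0.85 / len(Out[j]) + 0.15 / n

# Each column sums to 1 and every entry is positive: G is Markov.
assert np.allclose(G.sum(axis=0), 1.0)
assert (G > 0).all()

# (G^2)_{ij}: probability of landing at page i after 2 clicks from page j.
G2 = G @ G
```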
continued
The pagerank of page $i$ is defined to be $(v)_i$.

Interpretation: Pick $m$ very large, such that the columns of $G^m$ are virtually equal to $v$. Let RS start surfing. Consider the pages RS visits between the $m$-th and $(m + N)$-th clicks; the fraction of visits to page $i$ will tend to $(v)_i$ as $N \to \infty$ (intuitively believable, but also provable using a theorem in probability theory called the law of large numbers). It is reasonable to say that page $i$ has higher rank if this fraction is large.

Alternatively: if we set a large number of random surfers (with identical strategy) to the task, then the fraction of these surfers who will be at page $i$ after $m$ clicks (for all $m$ sufficiently large) is virtually $(v)_i$ (intuitively believable by the relative-frequency interpretation of probability, and provable by the law of large numbers). The larger this fraction, the more important we consider page $i$ to be. Hence using this as a page ranking is reasonable.
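The convergence of the columns of $G^m$ to a single fixed vector $v$ can be observed numerically; a sketch, again on a hypothetical 4-page web made up for illustration:

```python
import numpy as np

n = 4
Out = {0: [1, 2], 1: [2], 2: [0], 3: []}   # hypothetical link structure

G = np.zeros((n, n))
for j in range(n):
    if not Out[j]:
        G[:, j] = 1.0 / n
    else:
        G[:, j] = 0.15 / n
        for i in Out[j]:
            G[i, j] = 0.85 / len(Out[j]) + 0.15 / n

# For large m the columns of G^m all agree: that common column is v,
# and (v)_i is the pagerank of page i.
Gm = np.linalg.matrix_power(G, 200)
v = Gm[:, 0]
for j in range(1, n):
    assert np.allclose(Gm[:, j], v)    # every column is virtually v
assert np.allclose(G @ v, v)           # v is fixed by G
assert np.isclose(v.sum(), 1.0)        # v is a probability vector
```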