Linear Algebra Concepts

  1. Vector Spaces

  1. The rank of a matrix is the number of non-zero rows in its row echelon form (REF).
  2. Alternatively, the rank of a matrix equals the total number of columns minus the number of dependent columns.
  3. In REF form, if a column of the matrix can be expressed in terms of the other columns then it is a dependent vector. The leftmost columns with a leading 1 at the matching position (a 1 in row 1 column 1, a 1 in row 2 column 2, and so on) are the linearly independent columns. For linear independence, a1v1 + a2v2 + a3v3 = 0 must hold only for the trivial solution a1 = a2 = a3 = 0, where v1, v2, v3 are vectors and a1, a2, a3 are scalars.
  4. The span of a set of vectors is the set of all vectors that can be formed as linear combinations of them; a vector space is the span of its basis vectors.
  5. Basis vectors are linearly independent vectors that span the vector space.
  6. The column space is the set of all linear combinations of the column vectors.
  7. The row space is the set of all linear combinations of the row vectors.
  8. The dimension of the row space or the column space equals the rank of the matrix.
  9. The null space or kernel of a matrix A is the set of all vectors x for which Ax = 0.
  10. Rank-nullity theorem: the rank of a matrix A plus the dimension of its null space equals the number of columns. Rank(A) + Null(A) = n (number of cols of the matrix). (Illustrated in the sketch after this list.)
  11. A vector subspace is a subset of a vector space that is itself a vector space: it contains the null (zero) vector and is closed under addition and scalar multiplication.
  12. Vectors can be expressed in different coordinate systems through a change of basis.
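A minimal sketch (assuming NumPy and SciPy are available; the matrix is a hypothetical example) illustrating rank, the null space, and the rank-nullity theorem:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 2, 3],
              [2, 4, 6],   # dependent row: 2 x row 1
              [1, 0, 1]])

rank = np.linalg.matrix_rank(A)   # number of independent rows/columns
kernel = null_space(A)            # orthonormal basis for {x : Ax = 0}
nullity = kernel.shape[1]

print(rank, nullity, rank + nullity)  # 2 1 3 -> Rank(A) + Null(A) = n
```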

Distance measurement for vectors and matrices

  1. The Frobenius norm of a matrix is the square root of the sum of the squares of all its elements. It is used to measure the distance of a matrix from the zero matrix.
  2. The Euclidean norm (||x||) of a vector is the square root of the sum of the squares of its elements.
  3. The Manhattan norm of a vector is the sum of the absolute values of its elements.
  4. The inner product between two vectors U and V is U^T V.
  5. Likewise, the length of a vector V is sqrt(V^T V).
  6. Normalizing a vector (forming a unit vector) means dividing the vector by its magnitude: U/||U||.
  7. Cauchy-Schwarz inequality: the absolute value of the inner product of vectors x and y, written <x,y>, is at most the product of the norms of x and y. (See the sketch after this list.)
    1. |<x,y>| <= ||x||.||y||
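A short sketch (assuming NumPy; the matrix and vectors are hypothetical examples) of the norms and inner product above:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([3.0, 4.0])
y = np.array([1.0, 2.0])

frobenius = np.linalg.norm(A, 'fro')   # sqrt of the sum of squared entries
euclidean = np.linalg.norm(x)          # ||x|| = sqrt(x^T x) = 5.0
manhattan = np.linalg.norm(x, 1)       # sum of absolute values = 7.0
inner = x @ y                          # inner product x^T y = 11.0
unit = x / np.linalg.norm(x)           # normalized (unit) vector

# Cauchy-Schwarz: |<x,y>| <= ||x||.||y||
assert abs(inner) <= np.linalg.norm(x) * np.linalg.norm(y)
```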

Orthogonality

  1. When the inner product between two vectors is 0, they are orthogonal to each other.
  2. Orthogonal vectors that have also been normalized to unit length are called orthonormal vectors. (See the sketch after this list.)
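A quick sketch (assuming NumPy; the vectors are hypothetical examples) showing an orthogonal pair becoming orthonormal after normalization:

```python
import numpy as np

u = np.array([2.0, 0.0])
v = np.array([0.0, 3.0])

print(u @ v)   # 0.0 -> u and v are orthogonal

# Dividing each vector by its magnitude gives an orthonormal pair.
u_hat = u / np.linalg.norm(u)
v_hat = v / np.linalg.norm(v)
print(u_hat @ v_hat)                                  # still 0.0
print(np.linalg.norm(u_hat), np.linalg.norm(v_hat))  # 1.0 1.0
```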

Types of matrices

  1. A symmetric matrix is one whose transpose is the same as the matrix itself (A = A^T).
  2. A Gram matrix is formed as the product of a matrix and its transpose: G = A^T A (the inner products of the columns of A) or G = A A^T (the inner products of the rows).
  3. A diagonally dominant matrix is a square matrix where the absolute value of the diagonal element in each row is greater than or equal to the sum of the absolute values of all the other elements in that row.
  4. Matrices A and B are similar if B = P^-1 A P for some invertible matrix P; similar matrices have the same rank, determinant, and eigenvalues.
  5. A triangular matrix is one in which all the elements above or below the leading diagonal are zero.
  6. If all the elements below the leading diagonal are zero then it is called an upper triangular matrix. This can be codified as A(i,j) = 0 where i > j.
  7. The diagonal entries of a triangular matrix are its eigenvalues.
  8. A matrix A is diagonalizable if it can be represented as a product of 3 matrices: (1) D, the matrix whose diagonal elements are the eigenvalues of A and whose other elements are zero; (2) P, the matrix whose columns are the eigenvectors of A; and (3) P^-1. That is, A = PDP^-1. (See the sketch after this list.)
  9. A matrix A is a Hermitian matrix if it has complex entries and the complex conjugate of A is equal to its transpose (A = A^H, the conjugate transpose).
    1. The complex conjugate of -3+2i is -3-2i; only the sign of the imaginary part is changed.
    2. The diagonal of a Hermitian matrix always has real numbers.
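A sketch (assuming NumPy; the matrix is a hypothetical example) of a few of these types: the eigenvalues of a triangular matrix, a Gram matrix, and diagonalization:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # upper triangular: A[i, j] = 0 for i > j

print(np.allclose(A, A.T))        # False -> A is not symmetric
print(A.T @ A)                    # Gram matrix G = A^T A (always symmetric)
print(np.linalg.eigvals(A))       # [2. 3.] -> the diagonal entries

# Diagonalization A = P D P^-1 built from the eigen decomposition.
vals, P = np.linalg.eig(A)        # P's columns are eigenvectors of A
D = np.diag(vals)                 # eigenvalues on the diagonal
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True
```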

Symmetric Positive Definite Matrix and its properties

  1. A matrix is a positive definite matrix when all its eigenvalues are (1) real and (2) greater than zero. The key condition to be satisfied is x^T A x > 0 for every nonzero vector x. (A symmetric diagonally dominant matrix with positive diagonal entries is positive definite, but diagonal dominance is sufficient, not necessary.)
  2. Every symmetric positive definite matrix can be decomposed into the product of a lower triangular matrix and its transpose, known as the Cholesky decomposition. (See the sketch after this list.)
    1. A = LL^T
  3. They can be orthogonally diagonalized: A = QDQ^T, where Q is an orthogonal matrix of eigenvectors.
  4. A positive semi-definite matrix is one with non-negative eigenvalues; some eigenvalues can be zero, and the condition relaxes to x^T A x >= 0.
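A sketch (assuming NumPy; the matrix and test vector are hypothetical examples) checking positive definiteness and computing the Cholesky factor:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])                 # symmetric

print(np.all(np.linalg.eigvalsh(A) > 0))   # True -> positive definite
x = np.array([1.0, -1.0])
print(x @ A @ x > 0)                        # x^T A x = 3.0 > 0

L = np.linalg.cholesky(A)                   # lower triangular factor
print(np.allclose(A, L @ L.T))              # True: A = L L^T
```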

Eigenvalues and Eigenvectors

  1. For an N by N matrix A, a nonzero vector v is called an eigenvector of A if multiplying A by v gives a scalar multiple of the same vector v.
  2. Av = kv, where the scalar k is called the eigenvalue.
  3. The sum of all the eigenvalues equals the trace of the matrix, tr(A) (the sum of its diagonal entries).
  4. The product of all the eigenvalues equals the determinant of the matrix.
  5. To find the eigenvalues of the matrix A we find the roots of the characteristic equation det(A - kI) = 0, where k is the eigenvalue to be found and I is the identity matrix.
  6. If the same eigenvalue is repeated more than once, its algebraic multiplicity is the number of times it is repeated.
  7. The set of eigenvalues of A is also referred to as the spectrum of A.
  8. A nilpotent matrix is a special matrix whose eigenvalues are all zero. (See the sketch after this list.)
    1. Let l1, l2, ..., ln be the eigenvalues; then l1 = l2 = ... = ln = 0.
    2. Equivalently, raising the matrix to some positive power gives the zero matrix.
      1. A^k = 0 for some positive integer k.
      2. For example, if A has order 4 and A^2 = 0, then Rank(A) <= 4 - 2 = 2, since the column space of A must lie inside its null space.
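A sketch (assuming NumPy; the matrices are hypothetical examples) of eigenvalues, the trace and determinant identities, and a nilpotent matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals = np.linalg.eigvals(A)            # roots of det(A - kI) = 0
print(vals)                            # eigenvalues 3 and 1 (order may vary)
print(np.trace(A), vals.sum())         # trace = sum of eigenvalues = 4
print(np.linalg.det(A), vals.prod())   # determinant = product of eigenvalues = 3

# Nilpotent example: all eigenvalues are zero and N^k = 0 for some k.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(np.linalg.eigvals(N))            # [0. 0.]
print(N @ N)                           # the zero matrix, so N^2 = 0
```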
