
Matrix Eigenvalues & Eigenvectors

For a given nonzero square matrix A of dimension n x n, the problem of finding an unknown scalar λ and a nonzero vector x satisfying the equation below is called the (matrix) eigenvalue problem.

Ax = λ x

A scalar λ and a nonzero vector x that satisfy the above equation are called an eigenvalue and an eigenvector of A, respectively.

Ax must be proportional to x: the multiplication produces a new vector that has the same or the opposite direction as the original vector.
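As a quick numerical check, here is a minimal sketch using NumPy; the 2 x 2 matrix A is purely illustrative and not taken from the text.

import numpy as np

# Illustrative 2 x 2 matrix (an assumption made only for this demonstration)
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# numpy.linalg.eig returns the eigenvalues and the eigenvectors (as columns)
eigenvalues, eigenvectors = np.linalg.eig(A)

for i, lam in enumerate(eigenvalues):
    x = eigenvectors[:, i]
    # A x equals lambda x for every eigenpair, i.e. Ax is proportional to x
    print(lam, np.allclose(A @ x, lam * x))   # prints True for each eigenpair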

The set of all the eigenvalues of A is called the spectrum of A. The largest of the absolute values of the eigenvalues of A is called the spectral radius of A.
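For the same illustrative matrix, the spectrum and the spectral radius can be computed as follows (a sketch, again assuming NumPy):

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

spectrum = np.linalg.eigvals(A)        # all eigenvalues of A
spectral_radius = max(abs(spectrum))   # largest absolute value among the eigenvalues
print(spectrum, spectral_radius)       # for this A: eigenvalues 4 and 2, spectral radius 4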

To determine the eigenvalues and eigenvectors, the equation can be rewritten in matrix notation as

(A - λI)x = 0

By Cramer's theorem, this homogeneous linear system has a nontrivial solution if and only if the determinant of its coefficient matrix is zero:

D(λ) = det(A - λI) = 0

A - λI is called the characteristic matrix and D(λ) the characteristic determinant of A. The above equation is called the characteristic equation of A.

The eigenvalues of a square matrix A are the roots of the characteristic equation of A.
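As a worked example with the same assumed matrix, the characteristic polynomial det(A - λI) and its roots can be obtained as follows; numpy.poly applied to a square matrix returns the coefficients of its characteristic polynomial.

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# Coefficients of det(A - lambda*I), highest power first: lambda^2 - 6*lambda + 8
coeffs = np.poly(A)
print(coeffs)            # [ 1. -6.  8.]

# The eigenvalues of A are the roots of the characteristic equation
print(np.roots(coeffs))  # [4. 2.]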

Hence an n x n matrix has at least one eigenvalue and at most n numerically different eigenvalues.

The eigenvalues must be determined first; the corresponding eigenvectors are then obtained from the system (A - λI)x = 0, as sketched below.
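One way to obtain an eigenvector, sketched here for the assumed matrix and the eigenvalue λ = 2, is to find a nontrivial solution of (A - λI)x = 0, for example via the null space of A - λI:

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
lam = 2.0                  # an eigenvalue found from the characteristic equation

# (A - lambda*I) is singular, so its null space contains a nonzero vector x.
# The right-singular vector belonging to the zero singular value spans that null space.
M = A - lam * np.eye(2)
_, _, Vt = np.linalg.svd(M)
x = Vt[-1]                 # nontrivial solution, proportional to [1, -1]

print(x, np.allclose(A @ x, lam * x))   # True: x is an eigenvector for lambda = 2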

The sum of the eigenvalues of A equals the sum of the entries on the main diagonal of A, called the trace of A, and the product of the eigenvalues equals the determinant of A.
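Both identities are easy to verify numerically; a sketch, again assuming the illustrative matrix above:

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
eigenvalues = np.linalg.eigvals(A)

print(np.isclose(eigenvalues.sum(),  np.trace(A)))        # sum of eigenvalues equals the trace
print(np.isclose(eigenvalues.prod(), np.linalg.det(A)))   # product of eigenvalues equals the determinant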

The eigenvalues of Hermitian matrices are real. 
The eigenvalues of skew-Hermitian matrices are pure imaginary or 0. 
The eigenvalues of unitary matrices have absolute value 1.
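These three spectral properties can also be checked numerically; the matrices below are hypothetical examples constructed only for this sketch.

import numpy as np

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])       # Hermitian: H equals its conjugate transpose
S = 1j * H                          # skew-Hermitian: the conjugate transpose of S is -S
Q = np.linalg.qr(np.random.randn(3, 3) + 1j * np.random.randn(3, 3))[0]  # unitary factor of a QR decomposition

print(np.allclose(np.linalg.eigvals(H).imag, 0))      # Hermitian: eigenvalues are real
print(np.allclose(np.linalg.eigvals(S).real, 0))      # skew-Hermitian: eigenvalues are purely imaginary or 0
print(np.allclose(np.abs(np.linalg.eigvals(Q)), 1))   # unitary: eigenvalues have absolute value 1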
