
Matrix Eigenvalues & Eigenvectors

For a given nonzero square matrix A of dimension n x n, the problem of finding a scalar λ and a nonzero vector x satisfying the equation below is called the matrix eigenvalue problem.

Ax = λx

A λ and x that satisfy the above equation are called an eigenvalue and an eigenvector of A, respectively.

The equation says that Ax must be proportional to x: multiplying by A produces a vector with the same or opposite direction as the original vector, merely scaled by λ.

The set of all the eigenvalues of A is called the spectrum of A. The largest of the absolute values of the eigenvalues of A is called the spectral radius of A.
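As a quick numerical sketch of these definitions, the following uses NumPy on a hypothetical 2 x 2 matrix (chosen here purely for illustration) to compute the spectrum and the spectral radius, and to check that each pair satisfies Ax = λx:

```python
import numpy as np

# A hypothetical 2x2 matrix chosen for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Each eigenpair must satisfy A x = lambda x.
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)

# The spectrum is the set of eigenvalues; the spectral radius
# is the largest of their absolute values.
spectral_radius = max(abs(eigenvalues))
print(sorted(eigenvalues.real))  # the spectrum of A
print(spectral_radius)
```

For this matrix the spectrum is {2, 5}, so the spectral radius is 5.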

To determine the eigenvalues and eigenvectors, the defining equation is rewritten as a homogeneous linear system,

(A - λI)x = 0

By Cramer's theorem, this homogeneous linear system has a nontrivial solution if and only if the determinant of its coefficient matrix is zero:

D(λ) = det(A - λI) = 0
A - λI is called the characteristic matrix and D(λ) = det(A - λI) the characteristic determinant of A. The equation D(λ) = 0 is called the characteristic equation of A.

The eigenvalues of a square matrix A are the roots of the characteristic equation of A.
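For a 2 x 2 matrix the characteristic equation is a quadratic, so the roots can be found directly. A minimal sketch, using the same hypothetical matrix [[4, 1], [2, 3]] and only the standard library:

```python
import math

# Hypothetical 2x2 matrix A = [[a, b], [c, d]].
a, b, c, d = 4.0, 1.0, 2.0, 3.0

# det(A - lam*I) = (a - lam)(d - lam) - b*c
#               = lam^2 - (a + d)*lam + (a*d - b*c)
trace = a + d        # coefficient of -lambda
det = a * d - b * c  # constant term

# Roots of lam^2 - trace*lam + det = 0 by the quadratic formula.
disc = math.sqrt(trace**2 - 4 * det)
lam1 = (trace + disc) / 2
lam2 = (trace - disc) / 2
print(lam1, lam2)  # 5.0 2.0
```

Here trace = 7 and det = 10, so the characteristic equation λ² - 7λ + 10 = 0 has roots 5 and 2, matching the eigenvalues.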

Since D(λ) is a polynomial of degree n in λ, an n x n matrix has at least one eigenvalue and at most n numerically different eigenvalues.

The eigenvalues must be determined first; the corresponding eigenvectors are then obtained by solving the system (A - λI)x = 0 for each λ.
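Continuing the hypothetical example [[4, 1], [2, 3]]: for λ = 5 the system (A - 5I)x = 0 reduces to -x₁ + x₂ = 0, giving x = t(1, 1); for λ = 2 it reduces to 2x₁ + x₂ = 0, giving x = t(1, -2). A short check that these hand-derived eigenvectors satisfy Ax = λx:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# lambda = 5: (A - 5I)x = 0 gives -x1 + x2 = 0, so x = t*(1, 1).
x = np.array([1.0, 1.0])
assert np.allclose(A @ x, 5 * x)

# lambda = 2: (A - 2I)x = 0 gives 2*x1 + x2 = 0, so y = t*(1, -2).
y = np.array([1.0, -2.0])
assert np.allclose(A @ y, 2 * y)
```

Note that an eigenvector is determined only up to a nonzero scalar factor t.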

The sum of the eigenvalues of A equals the sum of the entries on the main diagonal of A, called the trace of A, and the product of the eigenvalues equals the determinant of A.
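Both identities can be checked numerically on the same hypothetical matrix, where trace = 7 and det = 10:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigs = np.linalg.eig(A)[0]

# Sum of eigenvalues equals the trace (2 + 5 = 7).
assert np.isclose(eigs.sum(), np.trace(A))

# Product of eigenvalues equals the determinant (2 * 5 = 10).
assert np.isclose(eigs.prod(), np.linalg.det(A))
```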

The eigenvalues of Hermitian matrices are real. 
The eigenvalues of skew-Hermitian matrices are pure imaginary or 0. 
The eigenvalues of unitary matrices have absolute value 1.
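These three spectral properties can be verified numerically. The matrices below are hypothetical examples of each class, chosen only to illustrate the checks:

```python
import numpy as np

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])             # Hermitian: H == H.conj().T
S = np.array([[1j, 1.0],
              [-1.0, 2j]])                # skew-Hermitian: S == -S.conj().T
U = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)  # unitary: U @ U.conj().T == I

# Hermitian eigenvalues are real (imaginary parts vanish).
assert np.allclose(np.linalg.eigvals(H).imag, 0)

# Skew-Hermitian eigenvalues are purely imaginary or 0.
assert np.allclose(np.linalg.eigvals(S).real, 0)

# Unitary eigenvalues lie on the unit circle: |lambda| = 1.
assert np.allclose(abs(np.linalg.eigvals(U)), 1)
```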
