
Gaussian Elimination - Row Reduction Algorithm

Gaussian elimination is a method for solving matrix equations of the form Ax = b. It is also known as the row reduction algorithm.

Back Substitution

Back substitution solves the last equation for its variable and then works backward, equation by equation, up to the first. The fundamental idea is to add multiples of one equation to the others in order to eliminate a variable, and to continue this process until only one variable is left in the last equation.
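As a minimal sketch, back substitution on an upper-triangular system Ux = y can be written as follows (a Python/NumPy illustration; the matrix values here are hypothetical, not the example from this post):

```python
import numpy as np

def back_substitute(U, y):
    """Solve U x = y for x, where U is upper triangular."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):       # start from the last equation
        s = U[i, i + 1:] @ x[i + 1:]     # contribution of already-solved variables
        x[i] = (y[i] - s) / U[i, i]
    return x

# Hypothetical upper-triangular system
U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 2.0]])
y = np.array([4.0, 5.0, 4.0])
print(back_substitute(U, y))
```

The last equation gives x3 directly; each earlier equation then has only one unknown left.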

Pivot row

The row used to eliminate a variable from the other rows is called the pivot row.
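A single elimination step using the pivot row can be sketched like this (a Python/NumPy illustration with hypothetical values):

```python
import numpy as np

# Augmented matrix [A | b] for a hypothetical 2x2 system
M = np.array([[2.0, 1.0, 5.0],
              [4.0, 3.0, 11.0]])

# Row 0 is the pivot row; eliminate x1 from row 1
factor = M[1, 0] / M[0, 0]       # multiplier = 4 / 2 = 2
M[1] = M[1] - factor * M[0]      # row 1 <- row 1 - 2 * row 0
print(M)                         # row 1 now starts with 0
```

After the step, the second row no longer involves x1, so it can be solved for x2 alone.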

Example:

Solving a linear equation

The augmented matrix for the system above is


The system is solved using back substitution. First, the variable x1 is eliminated from the rows below the first row (the pivot row) by carrying out row operations.

As the pivot entry of the second row becomes zero, that row is moved to the bottom by a row exchange (partial pivoting).


Now the second variable (x2) is eliminated by carrying out another row operation. The variable x3 can then be determined from the row-reduced matrix, after which the other variables (x1, x2) are calculated by back substitution. The values of the variables are

x3 = 2 ; x2 = 4 ; x1 = 2
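Since the augmented matrix image from the example is not reproduced in this copy, the full procedure (forward elimination with partial pivoting, then back substitution) can be sketched on a hypothetical system chosen to have the same solution as the example (x1 = 2, x2 = 4, x3 = 2):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: swap in the row with the largest pivot magnitude
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        for i in range(k + 1, n):
            M[i] -= (M[i, k] / M[k, k]) * M[k]
    # Back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

# Hypothetical system; without pivoting, the second pivot would vanish,
# so a row swap is needed, as in the example above
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 2.0, 1.0],
              [1.0, 2.0, 2.0]])
b = np.array([8.0, 14.0, 14.0])
print(gauss_solve(A, b))   # → [2. 4. 2.]
```

The row swap keeps the algorithm from dividing by a zero (or very small) pivot.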

The same operations can be performed with a single command, rref (reduced row echelon form), in MATLAB or Octave as follows.
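The MATLAB/Octave listing is not reproduced in this copy; the command there is `rref` applied to the augmented matrix. The same idea can be sketched in Python with SymPy's `Matrix.rref()` (the augmented matrix below is hypothetical, chosen to match the example's solution):

```python
from sympy import Matrix

# Hypothetical augmented matrix [A | b]
M = Matrix([[1, 1, 1, 8],
            [2, 2, 1, 14],
            [1, 2, 2, 14]])

R, pivots = M.rref()   # returns (reduced matrix, tuple of pivot columns)
print(R)               # identity block with the solution in the last column
```

In the reduced form, each row reads off one variable directly: the last column is the solution vector.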


A system is overdetermined if there are more equations than unknowns; determined if the number of equations equals the number of unknowns; and underdetermined if there are fewer equations than unknowns.

A system is called consistent if it has at least one solution, and inconsistent if it has no solution.
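Consistency can be checked with matrix ranks: Ax = b is consistent exactly when rank(A) = rank([A | b]) (the Rouché–Capelli theorem). A minimal sketch with hypothetical matrices:

```python
import numpy as np

def is_consistent(A, b):
    """A x = b has at least one solution iff rank(A) == rank([A | b])."""
    Ab = np.hstack([A, b.reshape(-1, 1)])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ab)

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # rank 1: the rows are dependent
print(is_consistent(A, np.array([3.0, 6.0])))   # b lies in the row pattern: consistent
print(is_consistent(A, np.array([3.0, 7.0])))   # contradictory equations: inconsistent
```

Appending b can only raise the rank when b introduces a contradiction, which is why the rank test detects inconsistency.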
