
Data Integration Process

Data mining often requires data integration, the merging of data from multiple data sources into a coherent store.

Entity Identification Problem

When matching attributes from one database to another during integration, special attention must be paid to the structure of the data. This is to ensure that any attribute functional dependencies and referential constraints in the source system match those in the target system. For example, customer_id in one database and cust_number in another may refer to the same attribute.

Redundancy and Correlation Analysis

Some redundancies can be detected by correlation analysis.

For nominal data, we use the χ² (chi-square) test. For numeric attributes, the correlation coefficient and covariance can be used.

Chi-Square Correlation Test (Pearson Statistic Test)

The χ² value (Pearson statistic) is computed as

\chi^2 = \sum_{i} \sum_{j} \frac{(o_{ij} - e_{ij})^2}{e_{ij}}

where

o_ij is the observed frequency (actual count)

e_ij is the expected frequency, computed as

e_{ij} = \frac{count(A = a_i) \times count(B = b_j)}{n}

where

n is the number of data tuples

The χ² statistic tests the hypothesis that A and B are independent; if this hypothesis can be rejected, then A and B are statistically correlated. The cells that contribute the most to the χ² value are those for which the actual count is very different from that expected.
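As a minimal sketch, the χ² computation can be carried out in plain Python over a 2×2 contingency table; the counts below are hypothetical, chosen only to illustrate the formula:

```python
# Chi-square test of independence for two nominal attributes A and B.
# Hypothetical 2x2 contingency table (counts are made up for illustration):
# rows = values of A, columns = values of B.
observed = [
    [250, 200],
    [50, 1000],
]

n = sum(sum(row) for row in observed)          # total number of data tuples
row_totals = [sum(row) for row in observed]
col_totals = [sum(observed[i][j] for i in range(len(observed)))
              for j in range(len(observed[0]))]

chi2 = 0.0
for i, row in enumerate(observed):
    for j, o_ij in enumerate(row):
        # expected frequency e_ij = count(A = a_i) * count(B = b_j) / n
        e_ij = row_totals[i] * col_totals[j] / n
        chi2 += (o_ij - e_ij) ** 2 / e_ij

print(round(chi2, 2))  # → 507.94
```

Comparing the resulting statistic against the critical χ² value for (r − 1)(c − 1) degrees of freedom decides whether the independence hypothesis can be rejected.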

Correlation Coefficient for Numeric data (Pearson’s product moment coefficient)

The correlation coefficient is computed as

r_{A,B} = \frac{\sum_{i=1}^{n} (a_i - \bar{A})(b_i - \bar{B})}{n \sigma_A \sigma_B}

where

n - number of tuples

a_i & b_i - respective values of A & B in tuple i

\bar{A} & \bar{B} - mean values of A & B

\sigma_A & \sigma_B - standard deviations of A & B

If r_{A,B} is greater than 0, then A & B are positively correlated. The higher the value, the stronger the correlation.

If the value is equal to zero, then there is no linear correlation between A and B.

If the value is less than zero, then A & B are negatively correlated.
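The correlation coefficient can be sketched directly from the formula in plain Python; the two attribute lists below are hypothetical values used only for illustration:

```python
import math

# Hypothetical paired numeric attributes (values made up for illustration).
A = [6, 5, 4, 3, 2, 1]
B = [20, 10, 14, 5, 5, 3]

n = len(A)
mean_a = sum(A) / n
mean_b = sum(B) / n
# Population standard deviations, to match the n in the denominator.
std_a = math.sqrt(sum((a - mean_a) ** 2 for a in A) / n)
std_b = math.sqrt(sum((b - mean_b) ** 2 for b in B) / n)

# r_{A,B} = sum((a_i - mean_A)(b_i - mean_B)) / (n * sigma_A * sigma_B)
r = sum((a - mean_a) * (b - mean_b) for a, b in zip(A, B)) / (n * std_a * std_b)
print(round(r, 3))  # → 0.892
```

Since r is positive here, the two hypothetical attributes are positively correlated: as A increases, B tends to increase as well.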

Covariance of Numeric data

The mean values of A and B, respectively, are also known as the expected values of A and B, that is,

E(A) = \bar{A} = \frac{\sum_{i=1}^{n} a_i}{n} \qquad E(B) = \bar{B} = \frac{\sum_{i=1}^{n} b_i}{n}

The covariance between A and B is defined as

Cov(A, B) = E\big((A - \bar{A})(B - \bar{B})\big) = \frac{\sum_{i=1}^{n} (a_i - \bar{A})(b_i - \bar{B})}{n}

Comparing r_{A,B} (the correlation coefficient) with covariance, we have

r_{A,B} = \frac{Cov(A, B)}{\sigma_A \sigma_B}
For two attributes A and B that tend to change together, if A is larger than the expected value of A, then B is likely to be larger than the expected value of B. Therefore, the covariance between A and B is positive. 
On the other hand, if one of the attributes tends to be above its expected value when the other attribute is below its expected value, then the covariance of A and B is negative.
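A minimal sketch of the covariance computation in plain Python, using hypothetical paired observations (the values are made up for illustration):

```python
# Hypothetical values of two numeric attributes observed at five time points.
A = [6, 5, 4, 3, 2]
B = [20, 10, 14, 5, 5]

n = len(A)
mean_a = sum(A) / n           # E(A)
mean_b = sum(B) / n           # E(B)

# Cov(A, B) = sum((a_i - mean_A)(b_i - mean_B)) / n
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(A, B)) / n
print(round(cov, 2))  # → 7.0
```

The covariance is positive here, so the two hypothetical attributes tend to rise and fall together; dividing cov by the product of the two standard deviations would recover r_{A,B}.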
