
Data Integration Process

Data mining often requires data integration, the merging of data from multiple data sources.

Entity Identification Problem

When matching attributes from one database to another during integration, special attention must be paid to the structure of the data. For example, customer_id in one database and cust_number in another may refer to the same attribute, and metadata such as data type, value range, and null rules help confirm the match. Care is also needed to ensure that any attribute functional dependencies and referential constraints in the source system match those in the target system.

Redundancy and Correlation Analysis

Some redundancies can be detected by correlation analysis.

For nominal data, we use the χ² (chi-square) test. For numeric attributes, the correlation coefficient and covariance can be used.

Chi-Square Correlation Test (Pearson's Chi-Square Statistic)

For two nominal attributes A and B, where A takes c distinct values a1, ..., ac and B takes r distinct values b1, ..., br, the chi-square value is computed as

χ² = Σi Σj (oij − eij)² / eij

where

oij is the observed frequency (actual count) of the joint event (A = ai, B = bj)

eij is the expected frequency, computed as

eij = count(A = ai) × count(B = bj) / n

where

n is the number of data tuples

The chi-square test checks the hypothesis that A and B are independent, that is, that there is no correlation between them. The cells that contribute the most to the chi-square value are those for which the actual count is very different from the expected count.
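As a concrete sketch of how the statistic falls out of the contingency counts, the following Python snippet computes χ² for a small 2×2 table; the attribute labels and counts are made up for illustration.

```python
# Chi-square for two nominal attributes from a contingency table.
# Rows: values of A (e.g. "male"/"female"), columns: values of B
# (e.g. "fiction"/"non-fiction"). The counts are illustrative only.
observed = [
    [250, 200],
    [50, 1000],
]

n = sum(sum(row) for row in observed)              # number of data tuples
row_totals = [sum(row) for row in observed]        # count(A = ai)
col_totals = [sum(col) for col in zip(*observed)]  # count(B = bj)

chi_square = 0.0
for i, row in enumerate(observed):
    for j, o_ij in enumerate(row):
        e_ij = row_totals[i] * col_totals[j] / n   # expected frequency
        chi_square += (o_ij - e_ij) ** 2 / e_ij    # (oij - eij)^2 / eij

print(f"chi-square = {chi_square:.2f}")
# Compare the value against the chi-square critical value for
# (rows - 1) * (cols - 1) degrees of freedom; a large value rejects
# the hypothesis that A and B are independent.
```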

Correlation Coefficient for Numeric Data (Pearson's Product Moment Coefficient)

rA,B = Σi (ai − Ā)(bi − B̄) / (n σA σB)

where

n - number of tuples

ai & bi - respective values of A & B in tuple i

Ā & B̄ - mean values of A & B

σA & σB - standard deviations of A & B

If rA,B is greater than 0, then A & B are positively correlated. The higher the value, the stronger the correlation.

If the value is equal to zero, then A and B are independent and there is no correlation between them.

If the value is less than zero, then A & B are negatively correlated.
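A minimal Python sketch of the same formula, computed directly from two paired lists (the sample values are hypothetical):

```python
import math

# Hypothetical paired observations of two numeric attributes A and B.
a = [6.0, 5.0, 8.0, 4.0, 7.0]
b = [30.0, 25.0, 41.0, 20.0, 35.0]

n = len(a)
mean_a = sum(a) / n                 # mean of A
mean_b = sum(b) / n                 # mean of B

# Population standard deviations of A and B.
std_a = math.sqrt(sum((x - mean_a) ** 2 for x in a) / n)
std_b = math.sqrt(sum((y - mean_b) ** 2 for y in b) / n)

# Pearson's product moment correlation coefficient rA,B.
r_ab = sum((x - mean_a) * (y - mean_b)
           for x, y in zip(a, b)) / (n * std_a * std_b)

print(f"r(A,B) = {r_ab:.3f}")       # close to +1 here: strongly positive
```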

Covariance of Numeric Data

The mean values of A and B, respectively, are also known as the expected values of A and B, that is,

E(A) = Ā = (Σi ai) / n and E(B) = B̄ = (Σi bi) / n

The covariance between A and B is defined as

Cov(A, B) = E((A − Ā)(B − B̄)) = Σi (ai − Ā)(bi − B̄) / n

If we compare rA,B (correlation coefficient) with covariance, we see that

rA,B = Cov(A, B) / (σA σB)

where σA and σB are the standard deviations of A and B.

For two attributes A and B that tend to change together, if A is larger than the expected value of A, then B is likely to be larger than the expected value of B. Therefore, the covariance between A and B is positive.

On the other hand, if one of the attributes tends to be above its expected value when the other attribute is below its expected value, then the covariance of A and B is negative.
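A short sketch of covariance on the same kind of paired data, showing how dividing by the two standard deviations recovers the correlation coefficient (the numbers are again made up):

```python
import math

# Hypothetical paired observations of two numeric attributes A and B.
a = [6.0, 5.0, 4.0, 3.0, 2.0]
b = [20.0, 10.0, 14.0, 5.0, 5.0]

n = len(a)
mean_a = sum(a) / n                 # E(A)
mean_b = sum(b) / n                 # E(B)

# Covariance: mean product of the deviations from the expected values.
cov_ab = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b)) / n

# Dividing by the standard deviations gives the correlation coefficient.
std_a = math.sqrt(sum((x - mean_a) ** 2 for x in a) / n)
std_b = math.sqrt(sum((y - mean_b) ** 2 for y in b) / n)
r_ab = cov_ab / (std_a * std_b)

print(f"Cov(A,B) = {cov_ab:.2f}, r(A,B) = {r_ab:.3f}")
# A positive covariance: A and B tend to be above their means together.
```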
