Data Mining & Knowledge Discovery

Data mining blends traditional data analysis methods with sophisticated algorithms for processing large volumes of data. It is the process of automatically discovering useful information in large data repositories, and it is an integral part of knowledge discovery in databases (KDD), the overall process of converting raw data into useful information.

Data mining techniques can be used to support a wide range of business intelligence applications such as customer profiling, targeted marketing, workflow management, store layout, and fraud detection.

Looking up individual records using a database management system or finding particular Web pages via a query to an Internet search engine are not data mining, but information retrieval tasks. Data mining techniques have been used to enhance information retrieval systems.

The data mining process consists of a series of transformation steps, from data preprocessing to postprocessing of data mining results.

Data Preprocessing

The purpose of preprocessing is to transform the raw input data into an appropriate format for subsequent analysis.

  • Feature selection
  • Dimensionality reduction
  • Normalization
  • Data subsetting
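
Two of the steps above can be sketched in a few lines of Python. This is an illustrative sketch, not a standard library API: min-max normalization scales values into [0, 1], and subsetting selects only the records that satisfy a predicate.

```python
# Minimal sketches of two preprocessing steps: normalization and subsetting.
# Function names and the sample data are illustrative assumptions.

def min_max_normalize(values):
    """Scale numeric values into the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def subset(records, predicate):
    """Keep only the records that satisfy a selection predicate."""
    return [r for r in records if predicate(r)]

ages = [18, 35, 52, 70]
print(min_max_normalize(ages))              # scaled into [0, 1]
print(subset(ages, lambda a: a >= 21))      # [35, 52, 70]
```

In practice these steps are usually done with a library such as pandas or scikit-learn, but the underlying transformations are this simple.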

Data Postprocessing

  • Filtering patterns
  • Visualization
  • Pattern interpretation
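
Pattern filtering, the first postprocessing step listed above, can be sketched as discarding discovered patterns whose support falls below a threshold. The (itemset, support) representation here is an assumption for illustration.

```python
# A sketch of pattern filtering as a postprocessing step:
# keep only mined patterns whose support meets a minimum threshold.
# The pattern representation (itemset, support) is assumed for illustration.

def filter_patterns(patterns, min_support):
    """Return the patterns whose support is at least min_support."""
    return [(items, sup) for items, sup in patterns if sup >= min_support]

mined = [({"bread", "milk"}, 0.40),
         ({"bread", "eggs"}, 0.05),
         ({"milk", "eggs"}, 0.12)]
print(filter_patterns(mined, 0.10))   # drops the 0.05-support pattern
```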

Traditional data analysis techniques face several practical difficulties:

  • Scalability
  • High Dimensionality
  • Heterogeneous & Complex Data
  • Data Ownership & Distribution
  • Non-traditional analysis
