
Data Mining Tasks

Data mining tasks are generally divided into two major categories:

Predictive tasks

Predict the value of a particular attribute based on the values of other attributes.

The attribute to be predicted is called the target or dependent variable.

The attributes used for making the prediction are called the explanatory or independent variables.

Descriptive tasks

Derive patterns (correlations, trends, clusters, trajectories, and anomalies) that summarize the underlying relationships in the data.

Descriptive tasks are exploratory in nature.

They often require postprocessing techniques to validate and explain the results.

Core data mining tasks

Predictive Modeling

Association Analysis

Cluster Analysis

Anomaly Detection


Predictive Modeling

The task of building a model for the target variable as a function of the explanatory variables.

There are two types of predictive tasks:

Regression

Used for continuous target variables.

Classification

Used for discrete (categorical) target variables.

Example:

Predicting whether a patient has a particular disease.
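A minimal sketch of a classification task: a 1-nearest-neighbour classifier built from scratch. The toy patient data, feature values, and class labels below are illustrative assumptions, not taken from the text.

```python
# Predictive modeling sketch: classify a new observation by the label
# of its nearest training example (1-nearest-neighbour).
# All data values here are made up for illustration.

def squared_distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train, new_point):
    """Return the target label of the training example closest to new_point."""
    _, label = min(train, key=lambda row: squared_distance(row[0], new_point))
    return label

# Toy training set: (explanatory variables, target variable)
train = [
    ((1.0, 1.0), "healthy"),
    ((1.2, 0.9), "healthy"),
    ((4.0, 4.2), "diseased"),
    ((4.1, 3.9), "diseased"),
]

print(predict(train, (1.1, 1.0)))  # prints "healthy" (nearest neighbours are healthy)
```

Here the tuple of measurements plays the role of the explanatory variables and the class label is the target variable; a regression task would instead predict a continuous value.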

Association Analysis

Used to discover patterns that describe strongly associated features in the data.

Example

Identifying products that are frequently bought together.
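A minimal sketch of association analysis, assuming a toy set of market-basket transactions: count how often each pair of items appears together and keep the pairs whose support (fraction of transactions containing both items) meets a threshold.

```python
from itertools import combinations
from collections import Counter

# Association analysis sketch: find frequently co-occurring item pairs.
# The transactions and the 0.5 support threshold are illustrative assumptions.

transactions = [
    {"bread", "milk"},
    {"bread", "milk", "eggs"},
    {"milk", "eggs"},
    {"bread", "milk", "butter"},
]

def frequent_pairs(transactions, min_support):
    """Return item pairs whose support >= min_support."""
    counts = Counter()
    for basket in transactions:
        for pair in combinations(sorted(basket), 2):
            counts[pair] += 1
    n = len(transactions)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

print(frequent_pairs(transactions, 0.5))
# → {('bread', 'milk'): 0.75, ('eggs', 'milk'): 0.5}
```

Real association-analysis algorithms such as Apriori prune the search space instead of enumerating all pairs, but the support computation is the same idea.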

Cluster Analysis

Used to identify groups of closely related observations.

Example

Grouping articles by related topic.
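A minimal sketch of cluster analysis: a few iterations of k-means on one-dimensional data. The data points and the choice of k = 2 are illustrative assumptions.

```python
# Cluster analysis sketch: k-means on 1-D data.
# Each point is assigned to its nearest centroid, then centroids are
# recomputed as cluster means; repeating this converges on compact groups.

def kmeans_1d(points, k, iters=10):
    """Partition 1-D points into k clusters with plain k-means."""
    centroids = points[:k]  # naive initialisation: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
print(kmeans_1d(data, 2))  # → [[1.0, 1.2, 0.8], [9.0, 9.5, 10.1]]
```

Unlike the predictive tasks above, no target labels are given; the groups emerge purely from similarity between observations.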

Anomaly Detection

The task of identifying observations whose characteristics are significantly different from the rest of the data.

A good anomaly detector must have a high detection rate and a low false alarm rate.

Example

Detecting credit card fraud.
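A minimal sketch of anomaly detection: flag observations whose z-score (distance from the mean in standard deviations) exceeds a threshold. The transaction amounts and the threshold of 2 are illustrative assumptions; a single large outlier inflates the standard deviation, which is why a threshold lower than the textbook 3-sigma is used on this tiny sample.

```python
import statistics

# Anomaly detection sketch: z-score outlier test.
# The amounts and the threshold are illustrative assumptions.

def anomalies(values, threshold=2.0):
    """Return values whose |z-score| exceeds the threshold."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

amounts = [20.0, 35.0, 25.0, 30.0, 22.0, 28.0, 900.0]  # one suspicious charge
print(anomalies(amounts))  # → [900.0]
```

A detector tuned this way illustrates the trade-off mentioned above: raising the threshold lowers the false alarm rate but also lowers the detection rate.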
