
Association rules analysis - Apriori algorithm

A collection of items is called an itemset; an itemset containing k items is a k-itemset. An itemset that occurs frequently in the transaction data is called a frequent itemset.

E.g., bread and butter, milk and diapers.

An itemset is called frequent if it satisfies a minimum threshold value for support. Support measures how often the items are purchased together in a single transaction, relative to all transactions. Confidence measures how often a rule's consequent is purchased in the transactions that already contain its antecedent.

Only itemsets that meet the minimum support threshold, and rules that meet the minimum confidence threshold, are retained by the frequent itemset mining method.

A frequent itemset mining algorithm is one that mines these hidden patterns efficiently, in a short time and with low memory consumption.

Association rules analysis is a technique to uncover how items are associated with each other. There are three common ways to measure association: support, confidence, and lift.

Market Basket Analysis is a popular application of Association Rules.

For example: bread => butter [support = 5%, confidence = 75%]

This rule states that bread and butter appear together in 5% of all transactions, and that 75% of the customers who bought bread also bought butter.

Support and confidence for itemsets A and B are given by the formulas

Support(A) = (number of transactions in which A appears) / (total number of transactions)

Confidence(A => B) = Support(A ∪ B) / Support(A)
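As a minimal sketch (not from the original post), the following Python snippet computes these measures for the bread => butter rule on a toy basket of transactions; the transaction data is an illustrative assumption.

```python
# Toy transaction database; each transaction is a set of items.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "diaper"},
    {"milk", "diaper"},
    {"bread", "milk"},
]

def support(itemset, transactions):
    # Fraction of transactions that contain every item in `itemset`.
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(a, b, transactions):
    # Confidence(A => B) = Support(A u B) / Support(A).
    return support(a | b, transactions) / support(a, transactions)

a, b = {"bread"}, {"butter"}
print(support(a | b, transactions))    # 2/5 = 0.40
print(confidence(a, b, transactions))  # 0.40 / 0.80 = 0.50
# Lift is named above but not defined in the post; the standard
# definition is Confidence(A => B) / Support(B): here 0.50 / 0.40 = 1.25.
print(confidence(a, b, transactions) / support(b, transactions))
```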

The Apriori algorithm was the first algorithm proposed for frequent itemset mining. It uses two steps, "join" and "prune", to reduce the search space, and proceeds iteratively, level by level, to discover the frequent itemsets.

If an itemset has support below the minimum support threshold, then all of its supersets will also fall below minimum support and can therefore be ignored. This property is called the antimonotone property.

The 'join' step generates candidate (k+1)-itemsets by joining the set of frequent k-itemsets with itself. The 'prune' step scans the database and counts the support of each candidate; a candidate that does not meet the minimum support, or that contains an infrequent k-subset, is regarded as infrequent and removed.
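The following Python sketch puts the join and prune steps together into the level-wise Apriori loop. It is an illustrative implementation under the same toy-data assumptions as the earlier snippet, not code from the original post; the `apriori` function name and signature are my own.

```python
from itertools import combinations

def apriori(transactions, min_support):
    # Level-wise Apriori sketch: join frequent k-itemsets into
    # (k+1)-candidates, then prune candidates below min_support.
    n = len(transactions)

    def frequent(candidates):
        # Prune step: keep only candidates meeting the minimum support.
        return {c for c in candidates
                if sum(1 for t in transactions if c <= t) / n >= min_support}

    level = frequent({frozenset([i]) for t in transactions for i in t})
    result, k = set(level), 1
    while level:
        # Join step: union pairs of frequent k-itemsets into (k+1)-candidates.
        candidates = {a | b for a, b in combinations(level, 2)
                      if len(a | b) == k + 1}
        # Antimonotone property: a candidate with any infrequent k-subset
        # cannot be frequent, so it is dropped before counting support.
        candidates = {c for c in candidates
                      if all(frozenset(s) in level for s in combinations(c, k))}
        level = frequent(candidates)
        result |= level
        k += 1
    return result
```

With the `transactions` list above and `min_support = 0.4`, this sketch returns the frequent itemsets {bread}, {butter}, {milk}, {diaper}, {bread, butter}, and {bread, milk}; the subset check inside the loop is exactly the antimonotone pruning described above.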

To discover a frequent pattern of size 100, one needs to generate on the order of 2^100 (about 1.27 × 10^30) candidates.

I.e., the number of candidates to be generated for a pattern of size k is on the order of 2^k.
