Clustering Analysis

Clustering is the process of grouping a set of data objects into multiple groups so that objects within a group are highly similar to one another but dissimilar to objects in other groups.

Clustering helps discover previously unknown groups within the data.

Cluster analysis can be used to gain insight into the distribution of data, to observe the characteristics of each cluster, and to focus on a particular set of clusters for further analysis.

The main focus here is on distance-based cluster analysis. Cluster analysis tools based on k-means, k-medoids, and several other methods have been built into many statistical analysis software packages.

Clustering is a form of learning by observation rather than learning by examples.

Requirements of Clustering

  • Scalability
  • Ability to deal with various types of attributes
  • Discovery of clusters with arbitrary shape
  • Requirement for domain knowledge
  • Ability to deal with noisy data
  • Incremental clustering and insensitivity to input order
  • Capability of clustering high-dimensionality data
  • Constraint-based clustering
  • Interpretability and usability

Many clustering algorithms work well on small data sets but produce biased results when applied to samples of large data sets.

Many clustering algorithms are designed for numeric data. However, algorithms that can also handle binary, nominal, ordinal, and complex data types are preferred.

Many clustering algorithms determine clusters based on Euclidean or Manhattan distance measures.
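The two distance measures mentioned above are easy to compute directly. A minimal sketch (the function names `euclidean` and `manhattan` are illustrative, not from any particular package):

```python
import math

def euclidean(p, q):
    # Euclidean (L2) distance: straight-line distance between two points
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    # Manhattan (L1) distance: sum of absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(p, q))

p, q = (0, 0), (3, 4)
print(euclidean(p, q))  # 5.0
print(manhattan(p, q))  # 7
```

Euclidean distance treats the space as isotropic, while Manhattan distance sums per-dimension differences, which makes it less sensitive to a single large deviation in one dimension.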

Many clustering algorithms require users to provide domain knowledge in the form of input parameters.

Clustering methods can differ with respect to the level of partitioning, whether or not clusters are mutually exclusive, the similarity measures used, and whether or not subspace clustering is performed.

Clustering Methods

Partitioning method

Most partitioning methods are distance-based. Given k, the number of partitions to construct, a partitioning method creates an initial partitioning and then iteratively relocates objects among the groups, typically representing each cluster by its mean (as in k-means) or medoid (as in k-medoids). It is effective for small to medium-sized data sets.
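The iterative relocation described above can be sketched as a minimal pure-Python k-means. This is an illustrative sketch, not a production implementation: the random initialization, fixed iteration count, and `kmeans` name are choices made here for brevity.

```python
import random

def kmeans(points, k, iters=10, seed=0):
    # Pick k initial centroids at random from the data
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assignment step: each point joins its nearest centroid
            # (squared Euclidean distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:
                # Update step: recompute the centroid as the mean of members
                centroids[i] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return centroids, clusters

points = [(1, 1), (1.5, 2), (8, 8), (9, 9)]
centroids, clusters = kmeans(points, k=2)
```

On this toy data the two well-separated groups are recovered regardless of which initial centroids are sampled; on real data, k-means is sensitive to initialization and is usually run several times.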

Hierarchical method

A hierarchical method creates a hierarchical decomposition of the given set of data objects. It can be classified as either agglomerative (bottom-up) or divisive (top-down), based on how the hierarchical decomposition is formed. Hierarchical clustering methods can be distance-based or based on density and continuity. A drawback is that once a merge or split step has been performed, it cannot be undone.
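The agglomerative (bottom-up) variant can be sketched with single linkage, where the distance between two clusters is the smallest distance between any pair of their members. This naive O(n³) sketch is for illustration only; practical implementations use more efficient merge strategies.

```python
import math

def single_linkage(points, target_k):
    # Agglomerative: start with each point in its own cluster
    clusters = [[p] for p in points]
    while len(clusters) > target_k:
        # Find the pair of clusters with the smallest single-link distance
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(math.dist(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        # Merge the closest pair; note this step cannot be undone later
        clusters[i].extend(clusters.pop(j))
    return clusters

result = single_linkage([(0, 0), (0, 1), (10, 10), (10, 11)], target_k=2)
```

Stopping at `target_k` clusters corresponds to cutting the dendrogram at one level; recording each merge instead would yield the full hierarchical decomposition.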

Density-based method

Density-based methods can divide a set of objects into multiple exclusive clusters or into a hierarchy of clusters. These methods grow clusters according to the density of objects in a neighborhood, so they can find arbitrarily shaped clusters and filter out outliers.
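The density idea can be sketched with a DBSCAN-style procedure: a point is a core point if at least `min_pts` points lie within radius `eps` of it, and clusters grow outward from core points; points in no dense neighborhood are labeled noise. This is a simplified sketch of the scheme, not a tuned implementation.

```python
def dbscan(points, eps, min_pts):
    # Labels: None = unvisited, -1 = noise, >= 0 = cluster id
    labels = [None] * len(points)

    def neighbors(i):
        # Indices of all points within eps of point i (including itself)
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2
                       for a, b in zip(points[i], q)) <= eps ** 2]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1            # sparse neighborhood: mark as noise
            continue
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:                  # expand via density-reachable points
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # border point previously marked noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:    # j is itself a core point: keep growing
                queue.extend(jn)
        cluster += 1
    return labels

labels = dbscan([(0, 0), (0.5, 0), (1, 0), (10, 10)], eps=1.0, min_pts=2)
```

The isolated point at (10, 10) has no dense neighborhood, so it is filtered out as noise rather than forced into a cluster, which is the behavior the paragraph above describes.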

Grid-based method

Grid-based methods quantize the object space into a finite number of cells that form a grid structure. The main advantage of this approach is its fast processing time, which is typically independent of the number of data objects and dependent only on the number of cells in each dimension in the quantized space.
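The quantization step can be sketched by mapping each point to an integer cell index and counting points per cell; dense cells can then be merged into clusters. The `grid_cells` helper below is an illustrative name, not from any library.

```python
from collections import defaultdict

def grid_cells(points, cell_size):
    # Quantize each point into a grid cell; density per cell is a count,
    # so the cost of this pass depends on the points only once and the
    # later clustering pass depends only on the number of occupied cells.
    cells = defaultdict(int)
    for p in points:
        cell = tuple(int(coord // cell_size) for coord in p)
        cells[cell] += 1
    return dict(cells)

counts = grid_cells([(0.2, 0.3), (0.8, 0.1), (5.5, 5.5)], cell_size=1.0)
```

Here the two nearby points fall into the same cell (0, 0) while the distant point occupies its own cell, illustrating why subsequent processing scales with the number of cells rather than the number of objects.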
