
Clustering Evaluation

The major tasks of clustering evaluation are as follows:

Clustering tendency assessment

This assessment is carried out to check whether a nonrandom structure exists in the given data set; applying a clustering algorithm to data with no inherent structure only produces meaningless partitions.

The Hopkins statistic is a spatial statistic that tests the spatial randomness of a variable as distributed in space. Its value is about 0.5 if the data set is uniformly distributed, and close to 0 if the data set is highly skewed (clustered). A uniformly distributed data set contains no meaningful clusters, so if the Hopkins statistic is greater than 0.5, it is unlikely that the data set has significant clusters.
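The Hopkins statistic can be computed by comparing nearest-neighbor distances from uniformly generated points against nearest-neighbor distances from sampled data points. Below is a minimal pure-Python sketch under that convention (H = Σw / (Σu + Σw), so clustered data gives H near 0); the function name, the default sample size of n/10, and the list-of-tuples data representation are assumptions for illustration, not part of the original text.

```python
import math
import random

def hopkins(data, m=None, seed=0):
    """Hopkins statistic: ~0.5 for uniformly distributed data,
    near 0 for highly skewed (clustered) data under this convention.
    `data` is a list of numeric tuples; `m` is the sample size
    (assumed default: n // 10)."""
    rng = random.Random(seed)
    n, d = len(data), len(data[0])
    m = m or max(1, n // 10)
    # Bounding box of the data, used to draw uniform random points.
    lo = [min(p[j] for p in data) for j in range(d)]
    hi = [max(p[j] for p in data) for j in range(d)]

    def nn_dist(q, pts):
        return min(math.dist(q, p) for p in pts)

    # u_i: distance from each of m uniform random points to its nearest data point.
    u = [nn_dist([rng.uniform(lo[j], hi[j]) for j in range(d)], data)
         for _ in range(m)]
    # w_i: distance from each of m sampled data points to its nearest other data point.
    sample = rng.sample(range(n), m)
    w = [nn_dist(data[i], [p for k, p in enumerate(data) if k != i])
         for i in sample]
    return sum(w) / (sum(u) + sum(w))
```

For two tight, well-separated clusters the data-to-data distances (w) are tiny compared with the uniform-to-data distances (u), so H comes out far below 0.5.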

Number of clusters determination

The number of clusters can be regarded as an important summary statistic of a data set. It is desirable to estimate the number of clusters even before a clustering algorithm is used to derive detailed clusters.

The appropriate number of clusters controls the proper granularity of cluster analysis. The right number of clusters depends on the shape and scale of the distribution in the data set, as well as the clustering resolution required.

A simple rule of thumb is to set the number of clusters to about the square root of n/2 for a data set of n points. The elbow method is based on the observation that increasing the number of clusters reduces the sum of within-cluster variances; the heuristic is to pick the turning point (the "elbow") in the curve of total within-cluster variance plotted against the number of clusters. The right number of clusters in a data set can also be determined by cross-validation, a technique often used in classification.
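The elbow method needs the within-cluster error for each candidate k. A minimal sketch, assuming a simple Lloyd's k-means with deterministic farthest-first initialization (the function name and initialization choice are illustrative, not prescribed by the text):

```python
import math

def kmeans_sse(data, k, iters=20):
    """Run Lloyd's k-means and return the sum of within-cluster
    squared errors (SSE) for the final clustering.
    `data` is a list of numeric tuples."""
    # Farthest-first initialization: deterministic and spreads centers out.
    centers = [data[0]]
    while len(centers) < k:
        centers.append(max(data, key=lambda p: min(math.dist(p, c) for c in centers)))
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in data:
            groups[min(range(k), key=lambda j: math.dist(p, centers[j]))].append(p)
        # Move each center to the mean of its group (keep it if the group is empty).
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return sum(min(math.dist(p, c) for c in centers) ** 2 for p in data)

# Elbow method: compute the SSE curve for a range of k and pick the
# turning point where the decrease flattens, e.g.:
#   curve = [kmeans_sse(data, k) for k in range(1, 7)]
```

For two well-separated clusters, SSE drops sharply from k = 1 to k = 2 and only marginally afterwards, which is exactly the elbow the heuristic looks for.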

Cluster quality measurement

A number of measures can be used to assess how well the clusters fit the data set and how well they match the ground truth. These methods fall into two categories, depending on whether ground truth (an ideal clustering) is available. If ground truth is available, the methods are called extrinsic methods (also known as supervised methods); otherwise they are called intrinsic methods (unsupervised methods), which evaluate the goodness of a clustering by considering how well the clusters are separated. A measure of clustering quality is effective if it satisfies the following criteria:

  • Cluster homogeneity
  • Cluster completeness
  • Rag bag
  • Small cluster preservation
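One extrinsic measure that satisfies these criteria is BCubed precision and recall: for each point, precision is the fraction of its predicted cluster that shares its true category, and recall is the fraction of its true category placed in the same predicted cluster. A minimal O(n²) sketch for illustration (the function name and label encoding are assumptions):

```python
def bcubed(labels_true, labels_pred):
    """BCubed precision and recall, averaged over all points.
    `labels_true` holds ground-truth categories, `labels_pred` the
    assigned cluster ids; both are parallel sequences of length n."""
    n = len(labels_true)
    prec = rec = 0.0
    for i in range(n):
        same_cluster = [j for j in range(n) if labels_pred[j] == labels_pred[i]]
        same_class = [j for j in range(n) if labels_true[j] == labels_true[i]]
        # Points in i's cluster that truly belong with i.
        correct = sum(1 for j in same_cluster if labels_true[j] == labels_true[i])
        prec += correct / len(same_cluster)
        # Points of i's category that the clustering kept together with i.
        rec += sum(1 for j in same_class
                   if labels_pred[j] == labels_pred[i]) / len(same_class)
    return prec / n, rec / n
```

A perfect clustering scores (1.0, 1.0); merging everything into one cluster keeps recall at 1.0 but drives precision down, reflecting the loss of cluster homogeneity.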
