Clustering Evaluation

The major tasks of clustering evaluation are as follows:

Clustering tendency assessment

This assessment checks whether a nonrandom structure exists in the given data set; clustering is meaningful only when such structure is present.

The Hopkins statistic is a spatial statistic that tests the spatial randomness of a variable as distributed in a space. The Hopkins statistic is about 0.5 if the data set is uniformly distributed, and close to 0 if the data set is highly skewed. A uniformly distributed data set contains no meaningful clusters; that is, if the Hopkins statistic is greater than 0.5, it is unlikely that the data set has statistically significant clusters.
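As an illustration, here is a minimal NumPy sketch of the Hopkins statistic under the convention described above (H ≈ 0.5 for uniform data, H near 0 for clustered data). The function name and sampling choices are my own, not taken from any particular library:

```python
import numpy as np

def hopkins_statistic(data, sample_size=None, rng=None):
    """Hopkins statistic: ~0.5 for spatially random data,
    near 0 for highly clustered (skewed) data."""
    rng = np.random.default_rng(rng)
    n, d = data.shape
    m = sample_size or max(1, n // 10)
    # x: distances from m sampled real points to their nearest *other* real point
    idx = rng.choice(n, size=m, replace=False)
    x = []
    for i in idx:
        dists = np.linalg.norm(data - data[i], axis=1)
        dists[i] = np.inf            # exclude the point itself
        x.append(dists.min())
    # y: distances from m uniform random points (in the bounding box)
    # to the nearest real point
    lo, hi = data.min(axis=0), data.max(axis=0)
    u = rng.uniform(lo, hi, size=(m, d))
    y = [np.linalg.norm(data - p, axis=1).min() for p in u]
    return sum(x) / (sum(x) + sum(y))
```

For tightly clustered data, the real points' nearest-neighbour distances (x) are tiny compared with the uniform probes' distances (y), so H falls toward 0; for uniform data, the two sums are comparable and H sits near 0.5.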

Number of clusters determination

The number of clusters can be regarded as an important summary statistic of a data set. It is desirable to estimate the number of clusters even before a clustering algorithm is used to derive detailed clusters.

The appropriate number of clusters controls the proper granularity of cluster analysis. The right number depends on the shape and scale of the distributions in the data set, as well as on the clustering resolution required.

A simple rule of thumb is to set the number of clusters to about √(n/2) for a data set of n points. The elbow method is based on the observation that increasing the number of clusters reduces the sum of within-cluster variances. The heuristic is to pick the turning point (the "elbow") in the curve of the total within-cluster variance plotted against the number of clusters. The right number of clusters can also be determined by cross-validation, a technique often used in classification.
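The elbow curve can be sketched with a basic Lloyd's k-means (written here from scratch so the example is self-contained; the function name and defaults are mine):

```python
import numpy as np

def kmeans_sse(data, k, n_iter=50, rng=None):
    """Within-cluster sum of squared errors after a basic Lloyd's k-means."""
    rng = np.random.default_rng(rng)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest center
        labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            pts = data[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
    return float(((data - centers[labels]) ** 2).sum())

# Elbow method: plot SSE against k and look for the turning point.
# sse_curve = [kmeans_sse(data, k, rng=0) for k in range(1, 7)]
```

For data with two well-separated groups, the SSE drops sharply from k=1 to k=2 and flattens afterwards, so the elbow sits at k=2.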

Cluster quality measurement

A number of measures can be used to assess how well the clusters fit the data set and how well they match the ground truth (the ideal clustering). These methods fall into two categories depending on whether ground truth is available. If it is, the methods are called extrinsic (also known as supervised) methods; otherwise they are called intrinsic (unsupervised) methods, which evaluate the goodness of a clustering by considering how well the clusters are separated. A measure of clustering quality is effective if it satisfies the following criteria:

  • Cluster homogeneity
  • Cluster completeness
  • Rag bag
  • Small cluster preservation
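As an example of an intrinsic measure, here is a minimal NumPy sketch of the silhouette coefficient, which scores how well separated the clusters are (values near +1 indicate compact, well-separated clusters). The implementation is a simplified illustration, not a library routine, and it assumes at least two clusters:

```python
import numpy as np

def silhouette_score(data, labels):
    """Mean silhouette coefficient over all points (assumes >= 2 clusters)."""
    labels = np.asarray(labels)
    scores = []
    for i, p in enumerate(data):
        d = np.linalg.norm(data - p, axis=1)
        same = labels == labels[i]
        if same.sum() <= 1:
            scores.append(0.0)       # convention for singleton clusters
            continue
        # a: mean distance to the other points in the same cluster
        a = d[same].sum() / (same.sum() - 1)
        # b: mean distance to the nearest other cluster
        b = min(d[labels == c].mean() for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

A high mean silhouette indicates that points are much closer to their own cluster than to any other, which is exactly the separation that intrinsic methods try to capture.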
