Clustering Analysis

Clustering is the process of grouping a set of data objects into multiple groups so that objects within a group have high similarity to one another but are very dissimilar to objects in other groups.

Clustering helps discover previously unknown groups within the data.

Cluster analysis can be used to gain insight into the distribution of data, to observe the characteristics of each cluster, and to focus on a particular set of clusters for further analysis.

The main focus here is distance-based cluster analysis. Cluster analysis tools based on k-means, k-medoids, and several other methods have also been built into many statistical analysis software packages.

Clustering is a form of learning by observation rather than learning by examples, since no class-labeled training examples are provided.

Requirements of Clustering

  • Scalability
  • Ability to deal with various types of attributes
  • Discovery of clusters with arbitrary shape
  • Requirement for domain knowledge
  • Ability to deal with noisy data
  • Incremental clustering and insensitivity to input order
  • Capability of clustering high-dimensional data
  • Constraint-based clustering
  • Interpretability and usability

Many clustering algorithms work well on small data sets, but clustering only a sample of a large data set may lead to biased results.

Many clustering algorithms are designed to work on numeric data. However, applications may require algorithms that can also handle binary, nominal, ordinal, and complex data types.

Many clustering algorithms determine clusters based on Euclidean or Manhattan distance measures; algorithms based on such measures tend to find spherical clusters, which makes discovering clusters of arbitrary shape difficult.
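As a quick illustration, the sketch below (not from the original post) computes both distance measures for a pair of 2-D points; the point values are made up for the example.

    import math

    def euclidean(p, q):
        # Straight-line distance: square root of the sum of squared differences.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    def manhattan(p, q):
        # City-block distance: sum of absolute differences per dimension.
        return sum(abs(a - b) for a, b in zip(p, q))

    p, q = (1.0, 2.0), (4.0, 6.0)
    print(euclidean(p, q))  # 5.0
    print(manhattan(p, q))  # 7.0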

Many clustering algorithms require users to provide domain knowledge in the form of input parameters, such as the desired number of clusters.

Clustering methods can differ with respect to the partitioning level, whether or not clusters are mutually exclusive, the similarity measures used, and whether or not subspace clustering is performed.

Clustering Methods

Partitioning method

Most partitioning methods are distance-based. Given k, the number of partitions to construct, a partitioning method creates an initial partitioning and then iteratively relocates objects between groups, with each cluster represented by its mean (as in k-means) or by a medoid (as in k-medoids). It is effective for small to medium-sized data sets.
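As a minimal sketch of a partitioning method, the example below runs k-means with scikit-learn; the library choice and the toy data are assumptions for illustration, not part of the original post.

    import numpy as np
    from sklearn.cluster import KMeans

    # Toy 2-D data with two visually separate groups (made up for the example).
    X = np.array([[1, 1], [1.5, 2], [1, 2], [8, 8], [8.5, 9], [9, 8]])

    # k (n_clusters) must be supplied up front -- an instance of the
    # domain-knowledge input parameter mentioned earlier.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)           # the partition each object belongs to
    print(km.cluster_centers_)  # each cluster is represented by its mean

Each object ends up in exactly one partition, which is why partitioning methods produce mutually exclusive clusters.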

Hierarchical method

A hierarchical method creates a hierarchical decomposition of the given set of data objects. It can be classified as either agglomerative (bottom-up) or divisive (top-down), based on how the hierarchical decomposition is formed. Hierarchical clustering methods can be distance-based or density- and continuity-based. A drawback is that once a merge or split is performed, it cannot be undone.
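A bottom-up (agglomerative) run can be sketched with scikit-learn as below; the "ward" linkage and the toy data are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    X = np.array([[1, 1], [1.5, 2], [1, 2], [8, 8], [8.5, 9], [9, 8]])

    # Each object starts in its own cluster; the closest pair of clusters
    # is merged repeatedly until n_clusters remain. Note that a merge,
    # once made, is never revisited.
    agg = AgglomerativeClustering(n_clusters=2, linkage="ward").fit(X)
    print(agg.labels_)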

Density-based method

A density-based method can divide a set of objects into multiple exclusive clusters or into a hierarchy of clusters. These methods grow clusters according to the notion of density: a cluster keeps growing as long as the number of objects in its neighborhood exceeds some threshold. They can find arbitrarily shaped clusters and filter out outliers as noise.
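The sketch below uses scikit-learn's DBSCAN as one concrete density-based method; the eps and min_samples values are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Two dense groups plus one isolated point (made up for the example).
    X = np.array([[1, 1], [1.2, 1.1], [1.1, 0.9],
                  [8, 8], [8.1, 8.2], [7.9, 8.1],
                  [50, 50]])

    # A point is a core point if at least min_samples points (itself
    # included) lie within distance eps of it; clusters grow outward from
    # core points, and unreachable points are labelled -1 (noise).
    db = DBSCAN(eps=0.5, min_samples=3).fit(X)
    print(db.labels_)  # [0 0 0 1 1 1 -1] -- the isolated point is filtered out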

Grid-based method

Grid-based methods quantize the object space into a finite number of cells that form a grid structure. The main advantage of this approach is its fast processing time, which is typically independent of the number of data objects and dependent only on the number of cells in each dimension of the quantized space.
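The toy sketch below shows only the quantization step that grid-based methods (e.g., STING, CLIQUE) share, not any complete algorithm; the cell size is an assumed resolution.

    from collections import Counter

    points = [(0.2, 0.3), (0.25, 0.35), (0.9, 0.8), (0.95, 0.85), (0.5, 0.1)]
    cell_size = 0.5  # assumed grid resolution; a finer grid means more cells

    def cell_of(point, size=cell_size):
        # Map a point to the integer coordinates of its grid cell.
        return tuple(int(coord // size) for coord in point)

    # One pass over the data produces per-cell statistics; later steps
    # operate on the cells, so their cost depends on the number of cells,
    # not the number of objects.
    counts = Counter(cell_of(p) for p in points)
    print(counts)  # Counter({(0, 0): 2, (1, 1): 2, (1, 0): 1})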
