
Density-Based Clustering

Partitioning and hierarchical methods are designed to find spherical-shaped clusters and often fail to find clusters of arbitrary shape. Density-based methods instead model clusters as dense regions in the data space separated by sparse regions, which allows clusters of arbitrary shape to be discovered. The main density-based methods are
  • DBSCAN (Density Based Spatial Clustering of Applications with Noise)
  • OPTICS (Ordering Points to Identify the Clustering Structure)
  • DENCLUE (Clustering Based on Density Distribution Functions)
The DBSCAN method finds core objects, that is, objects whose neighborhoods are dense. It connects core objects and their neighborhoods to form dense regions as clusters. A user-specified parameter ε > 0 specifies the radius of the neighborhood considered for every object, and a second parameter, MinPts, specifies the density threshold of dense regions. An object is a core object if the ε-neighborhood of the object contains at least MinPts objects.
All core objects can be identified with respect to the given parameters, ε and MinPts. The clustering task is thereby reduced to using core objects and their neighborhoods to form clusters.
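As a concrete illustration, here is a minimal Python sketch of this identification step, assuming Euclidean distance, a brute-force neighborhood search, and the common convention that an object's ε-neighborhood includes the object itself (the function name find_core_objects is illustrative, not from any particular library):

```python
import numpy as np

def find_core_objects(X, eps, min_pts):
    """Return the indices of the core objects in the data set X."""
    core = []
    for i in range(len(X)):
        # Brute-force eps-neighborhood: all objects within distance eps of X[i],
        # including X[i] itself (assumed convention).
        dists = np.linalg.norm(X - X[i], axis=1)
        if np.sum(dists <= eps) >= min_pts:
            core.append(i)
    return core

# Example: three close points form a dense neighborhood, one point is isolated.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
print(find_core_objects(X, eps=0.3, min_pts=3))   # -> [0, 1, 2]
```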
An object p is directly density-reachable from an object q with respect to ε and MinPts if q is a core object and p is in the ε-neighborhood of q. An object p is density-reachable from q with respect to ε and MinPts if there is a chain of objects from q to p in which each object is directly density-reachable from the previous one. Density-reachability is not symmetric in general; two objects are density-reachable from each other only if both are core objects. To connect core objects as well as their neighbors in a dense region, the method uses the notion of density-connectedness: two objects p1 and p2 are density-connected with respect to ε and MinPts if there is an object q such that both p1 and p2 are density-reachable from q with respect to ε and MinPts.
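A cluster is then a maximal set of density-connected objects, and any object that belongs to no cluster is treated as noise. The sketch below shows how clusters can be grown from core objects by following chains of directly density-reachable objects; it again assumes Euclidean distance and brute-force neighborhood queries, and the names dbscan and region_query are illustrative rather than taken from any library:

```python
import numpy as np

NOISE = -1

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN sketch: assign each object a cluster id, or NOISE."""
    labels = [None] * len(X)          # None means "not yet visited"
    cluster_id = 0

    def region_query(i):
        # Indices of all objects within eps of X[i] (its eps-neighborhood).
        return np.flatnonzero(np.linalg.norm(X - X[i], axis=1) <= eps)

    for i in range(len(X)):
        if labels[i] is not None:
            continue
        neighbors = region_query(i)
        if len(neighbors) < min_pts:          # not a core object
            labels[i] = NOISE
            continue
        # Start a new cluster at core object i and expand it.
        labels[i] = cluster_id
        seeds = list(neighbors)
        while seeds:
            j = seeds.pop()
            if labels[j] == NOISE:            # previously labelled noise: border object
                labels[j] = cluster_id
            if labels[j] is not None:         # already assigned, do not expand
                continue
            labels[j] = cluster_id
            j_neighbors = region_query(j)
            if len(j_neighbors) >= min_pts:   # j is a core object: keep expanding
                seeds.extend(j_neighbors)
        cluster_id += 1
    return labels
```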
The time complexity of DBSCAN is O(n log n) if a spatial index is used, and O(n²) otherwise.
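In practice the neighborhood queries are usually handed to a library implementation that can use such an index (for example a k-d tree or ball tree). A small usage example with scikit-learn's DBSCAN, with arbitrary data and parameter values:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense groups plus one isolated object that should come out as noise.
X = np.array([[0.0, 0.0], [0.1, 0.1], [0.0, 0.2],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.2],
              [9.0, 0.0]])

# eps is the neighborhood radius, min_samples plays the role of MinPts;
# algorithm='ball_tree' requests an index-based neighborhood search.
labels = DBSCAN(eps=0.5, min_samples=3, algorithm='ball_tree').fit_predict(X)
print(labels)   # e.g. [ 0  0  0  1  1  1 -1]  (-1 marks noise)
```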
