Attribute subset selection reduces the data set size by removing irrelevant or redundant attributes. The goal is to find a minimum set of attributes such that the resulting probability distribution of the data classes is as close as possible to the original distribution obtained using all attributes.
Mining on a reduced attribute set also makes the discovered patterns easier to understand, since they involve fewer attributes.
Attribute subset selection techniques
Forward selection - starts with an empty set of attributes and, at each step, adds the best of the remaining attributes (see the first sketch after this list).
Backward elimination - starts with the full set of attributes and, at each step, removes the worst attribute remaining in the set.
Decision tree induction - a tree is constructed from the data, and attributes that do not appear in the tree are assumed to be irrelevant (see the second sketch below).
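A minimal sketch of greedy forward selection and backward elimination, assuming scikit-learn is available; the SequentialFeatureSelector class, the breast-cancer toy data set, and the choice of 5 attributes are illustrative assumptions, not part of the original notes.

    # Greedy attribute subset selection (sketch, assuming scikit-learn).
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    data = load_breast_cancer()
    X, y = data.data, data.target
    estimator = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

    # Forward selection: start from the empty set and greedily add the
    # attribute that most improves cross-validated accuracy, until 5 remain.
    forward = SequentialFeatureSelector(
        estimator, n_features_to_select=5, direction="forward", cv=5
    ).fit(X, y)

    # Backward elimination: start from the full set and greedily drop the
    # attribute whose removal hurts cross-validated accuracy the least.
    backward = SequentialFeatureSelector(
        estimator, n_features_to_select=5, direction="backward", cv=5
    ).fit(X, y)

    print("Forward selection kept:", data.feature_names[forward.get_support()])
    print("Backward elimination kept:", data.feature_names[backward.get_support()])

The two directions can keep different subsets, since each makes a greedy choice at every step rather than searching all possible attribute combinations.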
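A second sketch, for the decision tree heuristic: a tree is induced from the data, and any attribute that never appears as a split node is treated as irrelevant. The max_depth setting and the same toy data set are assumptions made only for the example.

    # Decision-tree-based attribute selection (sketch, assuming scikit-learn).
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier

    data = load_breast_cancer()
    tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(data.data, data.target)

    # tree_.feature holds the attribute index used at each internal node;
    # leaf nodes are marked with a negative value and are filtered out.
    used = np.unique(tree.tree_.feature[tree.tree_.feature >= 0])
    print("Attributes appearing in the tree:", data.feature_names[used])
    print("Attributes assumed irrelevant:",
          [f for i, f in enumerate(data.feature_names) if i not in used])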