Issue No. 01, Jan. 2014 (vol. 26)

pp: 16-29

Pradipta Maji, Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India

DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TKDE.2012.242

ABSTRACT

The selection of relevant and significant features is an important problem, particularly for data sets with a large number of features. In this regard, a new feature selection algorithm is presented based on a rough hypercuboid approach. It selects a set of features from a data set by maximizing the relevance, dependency, and significance of the selected features. By introducing the concept of the hypercuboid equivalence partition matrix, a novel representation of the degree of dependency of sample categories on features is proposed to measure the relevance, dependency, and significance of features in approximation spaces. The equivalence partition matrix also offers an efficient way to calculate many other quantitative measures that describe the inexactness of approximate classification. Several quantitative indices are introduced based on the rough hypercuboid approach for evaluating the performance of the proposed method. The superiority of the proposed method over other feature selection methods, in terms of computational complexity and classification accuracy, is established extensively on various real-life data sets of different sizes and dimensions.
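The abstract does not give pseudocode for the proposed algorithm. As a rough illustration of the general idea of dependency-based greedy feature selection, the sketch below uses the classical rough-set dependency degree (the fraction of samples whose equivalence class under the selected features is label-pure) rather than the paper's hypercuboid equivalence partition matrix; the function names and the forward-selection loop are assumptions for illustration only.

```python
from collections import defaultdict

def dependency_degree(data, labels, features):
    """Classical rough-set dependency degree: the fraction of samples
    whose equivalence class (w.r.t. the chosen features) is label-pure.
    This is a stand-in for the paper's hypercuboid-based measure."""
    if not features:
        return 0.0
    groups = defaultdict(list)
    for row, y in zip(data, labels):
        key = tuple(row[f] for f in features)  # equivalence class key
        groups[key].append(y)
    consistent = sum(len(ys) for ys in groups.values() if len(set(ys)) == 1)
    return consistent / len(data)

def greedy_select(data, labels, n_features, max_size=None):
    """Greedy forward selection: at each step add the feature that most
    increases the dependency of the class labels on the selected set,
    stopping when no candidate improves it."""
    selected, best = [], 0.0
    max_size = max_size or n_features
    while len(selected) < max_size:
        gains = [(dependency_degree(data, labels, selected + [f]), f)
                 for f in range(n_features) if f not in selected]
        gamma, f = max(gains)
        if gamma <= best:
            break
        selected.append(f)
        best = gamma
    return selected, best
```

On a toy data set where one feature perfectly determines the class, the loop stops after selecting that single feature with dependency 1.0; the actual method additionally weighs relevance and significance, which this sketch omits.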

INDEX TERMS

Approximation methods, rough sets, data analysis, uncertainty, data mining, redundancy, rough hypercuboid approach, pattern recognition, feature selection

CITATION

Pradipta Maji, "A Rough Hypercuboid Approach for Feature Selection in Approximation Spaces",

*IEEE Transactions on Knowledge & Data Engineering*, vol. 26, no. 1, pp. 16-29, Jan. 2014, doi:10.1109/TKDE.2012.242