Issue No. 1, Jan. 2014 (vol. 26)
pp. 16-29
Pradipta Maji, Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India
The selection of relevant and significant features is an important problem, particularly for data sets with a large number of features. In this regard, a new feature selection algorithm is presented based on a rough hypercuboid approach. It selects a set of features from a data set by maximizing the relevance, dependency, and significance of the selected features. By introducing the concept of the hypercuboid equivalence partition matrix, a novel representation of the degree of dependency of sample categories on features is proposed to measure the relevance, dependency, and significance of features in approximation spaces. The equivalence partition matrix also offers an efficient way to calculate many other quantitative measures describing the inexactness of approximate classification. Several quantitative indices based on the rough hypercuboid approach are introduced for evaluating the performance of the proposed method. The superiority of the proposed method over other feature selection methods, in terms of computational complexity and classification accuracy, is established extensively on various real-life data sets of different sizes and dimensions.
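The abstract describes a greedy criterion: repeatedly add the feature that most increases the degree of dependency of the decision classes on the selected set. As a rough illustration of that general idea (not the paper's hypercuboid formulation), the sketch below implements the classical rough-set dependency measure gamma(B) over discretized features, with a QuickReduct-style forward search; all function and variable names here are hypothetical.

```python
from collections import defaultdict

def dependency(X, y, features):
    """Classical rough-set dependency gamma(B) = |POS_B(D)| / |U|: the
    fraction of samples whose B-equivalence class is pure in the decision.
    X is a list of rows of discretized feature values, y the class labels."""
    groups = defaultdict(list)
    for i, row in enumerate(X):
        # samples sharing the same values on `features` are B-indiscernible
        groups[tuple(row[f] for f in features)].append(i)
    consistent = sum(len(idxs) for idxs in groups.values()
                     if len({y[i] for i in idxs}) == 1)
    return consistent / len(X)

def greedy_reduct(X, y):
    """Forward selection: add the feature giving the largest gain in gamma,
    stopping when gamma reaches 1 or no feature improves it."""
    n_features = len(X[0])
    selected, best = [], 0.0
    while best < 1.0:
        candidates = [(dependency(X, y, selected + [f]), f)
                      for f in range(n_features) if f not in selected]
        if not candidates:
            break
        gain, f = max(candidates)
        if gain <= best:
            break
        selected.append(f)
        best = gain
    return selected, best
```

The paper's contribution replaces this crisp equivalence-class computation with the hypercuboid equivalence partition matrix, which handles real-valued features in approximation spaces directly; the greedy search skeleton, however, is of the same shape.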
Keywords: Approximation methods, rough sets, data analysis, uncertainty, data mining, redundancy, rough hypercuboid approach, pattern recognition, feature selection
Pradipta Maji, "A Rough Hypercuboid Approach for Feature Selection in Approximation Spaces," IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 1, pp. 16-29, Jan. 2014, doi:10.1109/TKDE.2012.242