Issue No. 09 - Sept. (2016 vol. 28)
ISSN: 1041-4347
pp: 2508-2521
Bo Tang , Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island, Kingston, RI
Steven Kay , Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island, Kingston, RI
Haibo He , Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island, Kingston, RI
ABSTRACT
Automated feature selection is important for text categorization to reduce feature size and to speed up the learning process of classifiers. In this paper, we present a novel and efficient feature selection framework based on information theory, which aims to rank features by their discriminative capacity for classification. We first revisit two information measures, the Kullback-Leibler divergence and the Jeffreys divergence, for binary hypothesis testing, and analyze their asymptotic properties relating to the type I and type II errors of a Bayesian classifier. We then introduce a new divergence measure, called the Jeffreys-Multi-Hypothesis (JMH) divergence, to measure multi-distribution divergence for multi-class classification. Based on the JMH-divergence, we develop two efficient feature selection methods, termed the maximum discrimination (MD) and MD-χ² methods, for text categorization. The promising results of extensive experiments demonstrate the effectiveness of the proposed approaches.
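For context (standard definitions, not taken from the article): for discrete distributions $p$ and $q$, the two classical divergences named in the abstract are

$D_{\mathrm{KL}}(p \,\|\, q) = \sum_{x} p(x)\,\log \frac{p(x)}{q(x)}, \qquad J(p, q) = D_{\mathrm{KL}}(p \,\|\, q) + D_{\mathrm{KL}}(q \,\|\, p),$

where the Jeffreys divergence $J$ symmetrizes the KL divergence. The JMH-divergence proposed in the paper is described in the abstract as extending this kind of divergence measurement to multiple class-conditional distributions for multi-class classification.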
INDEX TERMS
Text categorization, Vocabulary, Bayes methods, Biomedical measurement, Computational efficiency, Frequency measurement, Support vector machines
CITATION

B. Tang, S. Kay and H. He, "Toward Optimal Feature Selection in Naive Bayes for Text Categorization," in IEEE Transactions on Knowledge & Data Engineering, vol. 28, no. 9, pp. 2508-2521, 2016.
doi:10.1109/TKDE.2016.2563436