Issue No. 07 - July 2011 (vol. 23)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TKDE.2010.160
Murat Can Ganiz , Dogus University, Istanbul
Cibin George , Rutgers University, Piscataway
William M. Pottenger , Rutgers University, Piscataway
The underlying assumption in traditional machine learning algorithms is that instances are Independent and Identically Distributed (IID). This critical independence assumption prevents such algorithms from going beyond instance boundaries to exploit latent relations between features. In this paper, we develop a general approach to supervised learning by leveraging higher order dependencies between features. We introduce a novel Bayesian framework for classification termed Higher Order Naïve Bayes (HONB). Unlike approaches that assume data instances are independent, HONB leverages higher order relations between features across different instances. The approach is validated in the classification domain on widely used benchmark data sets. Results obtained on several benchmark text corpora demonstrate that higher order approaches achieve significant improvements in classification accuracy over the baseline methods, especially when training data is scarce. A complexity analysis also reveals that the space and time complexity of HONB compare favorably with existing approaches.
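To make the notion of a "higher order relation between features across different instances" concrete, the following sketch enumerates second-order feature pairs: two features that never share a document but are linked through an intermediate feature that co-occurs with each of them in two distinct documents. This is an illustrative simplification, not the authors' HONB implementation; all names are hypothetical.

```python
# Hedged sketch of second-order feature co-occurrence, the kind of
# cross-instance relation the abstract describes. Not the paper's code.

def second_order_pairs(docs):
    """Return feature pairs (f1, f2) connected by a path f1 - g - f2,
    where f1 co-occurs with g in one document and g co-occurs with f2
    in a *different* document. Such pairs span instance boundaries."""
    pairs = set()
    for i, d1 in enumerate(docs):
        for j, d2 in enumerate(docs):
            if i == j:
                continue  # the two co-occurrences must come from distinct instances
            shared = set(d1) & set(d2)  # candidate intermediate features g
            for g in shared:
                for f1 in set(d1) - {g}:
                    for f2 in set(d2) - {g}:
                        if f1 != f2:
                            pairs.add(tuple(sorted((f1, f2))))
    return pairs

# "apple" and "banana" never share a document, yet both co-occur with
# "fruit", so a second-order path links them across the two instances.
docs = [["apple", "fruit"], ["fruit", "banana"]]
print(second_order_pairs(docs))
```

A first-order (IID-style) model sees no evidence connecting "apple" and "banana" here; a higher-order model can exploit the path through "fruit", which is especially valuable when training data is scarce.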
Machine learning, statistical relational learning, Naïve Bayes, text classification, IID.
M. C. Ganiz, C. George and W. M. Pottenger, "Higher Order Naïve Bayes: A Novel Non-IID Approach to Text Classification," in IEEE Transactions on Knowledge & Data Engineering, vol. 23, no. 7, pp. 1022-1034, July 2011.