Issue No. 10, October 2009 (vol. 21)
pp. 1361-1371
Liangxiao Jiang , China University of Geosciences, Wuhan
Harry Zhang , University of New Brunswick, Fredericton
Zhihua Cai , China University of Geosciences, Wuhan
Because learning an optimal Bayesian network classifier is an NP-hard problem, learning improved naive Bayes classifiers has attracted much attention from researchers. In this paper, we summarize the existing improved algorithms and propose a novel Bayes model: hidden naive Bayes (HNB). In HNB, a hidden parent is created for each attribute that combines the influences from all other attributes. We experimentally test HNB in terms of classification accuracy, using the 36 UCI data sets selected by Weka, and compare it to naive Bayes (NB), selective Bayesian classifiers (SBC), naive Bayes tree (NBTree), tree-augmented naive Bayes (TAN), and averaged one-dependence estimators (AODE). The experimental results show that HNB significantly outperforms NB, SBC, NBTree, TAN, and AODE. In many data mining applications, accurate class probability estimation and ranking are also desirable. We study the class probability estimation and ranking performance of naive Bayes and its improved models (SBC, NBTree, TAN, and AODE), measured by conditional log likelihood (CLL) and the area under the ROC curve (AUC), respectively, and then compare HNB to them in terms of CLL and AUC. Our experiments show that HNB also significantly outperforms all of them.
Naive Bayes, Bayesian network classifiers, learning algorithms, classification, class probability estimation, ranking.
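The abstract does not give the model's formulas, but the hidden-parent idea can be sketched concretely: each attribute gets a single synthetic parent that is a weighted mixture of all other attributes, so the classifier computes P(c) * prod_i P(a_i | hp_i, c) instead of naive Bayes's P(c) * prod_i P(a_i | c). The sketch below, for discrete attributes, assumes Laplace-smoothed frequency estimates and conditional-mutual-information weights; it is a hedged reconstruction for illustration, not the authors' exact procedure, and all function names are ours.

```python
import numpy as np

def fit_hnb(X, y, n_vals, n_classes, alpha=1.0):
    """Sketch of HNB-style training on discrete data.

    Estimates, with Laplace smoothing, the class prior P(c) and the pairwise
    conditionals P(a_i | a_j, c), and derives hidden-parent weights from the
    conditional mutual information I(A_i; A_j | C) (an assumed weighting)."""
    n, d = X.shape
    prior = np.zeros(n_classes)
    pair = np.full((d, d, n_vals, n_vals, n_classes), alpha)  # smoothed counts
    for x, c in zip(X, y):
        prior[c] += 1
        for i in range(d):
            for j in range(d):
                pair[i, j, x[i], x[j], c] += 1
    prior = (prior + alpha) / (n + n_classes * alpha)
    joint = pair / pair.sum(axis=(2, 3), keepdims=True)  # P(a_i, a_j | c)
    p2 = pair / pair.sum(axis=2, keepdims=True)          # P(a_i | a_j, c)

    # Hidden-parent weights: W[i, j] proportional to I(A_i; A_j | C),
    # normalized so each row sums to one over j != i.
    W = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            if i == j:
                continue
            pvu = joint[i, j]                    # (n_vals, n_vals, n_classes)
            pv = pvu.sum(axis=1, keepdims=True)  # marginal of a_i given c
            pu = pvu.sum(axis=0, keepdims=True)  # marginal of a_j given c
            per_class = (pvu * np.log(pvu / (pv * pu))).sum(axis=(0, 1))
            W[i, j] = float((prior * per_class).sum())
        s = W[i].sum()
        if s > 0:
            W[i] /= s
        else:                                    # degenerate case: uniform mix
            W[i, np.arange(d) != i] = 1.0 / (d - 1)
    return prior, p2, W

def predict_proba_hnb(x, prior, p2, W):
    """P(c | x) proportional to P(c) * prod_i P(a_i | hp_i, c), where the
    hidden parent hp_i mixes every other attribute with weights W[i, j]."""
    d, n_classes = len(x), len(prior)
    scores = prior.copy()
    for c in range(n_classes):
        for i in range(d):
            scores[c] *= sum(W[i, j] * p2[i, j, x[i], x[j], c]
                             for j in range(d) if j != i)
    return scores / scores.sum()

# Toy binary data: class 0 instances mostly take value 0, class 1 mostly 1.
X = np.array([[0, 0], [0, 0], [0, 1], [1, 1], [1, 1], [1, 0]])
y = np.array([0, 0, 0, 1, 1, 1])
prior, p2, W = fit_hnb(X, y, n_vals=2, n_classes=2)
probs = predict_proba_hnb(np.array([0, 0]), prior, p2, W)
```

Like the one-dependence estimators the paper compares against, this keeps structure learning trivial (no search over networks), which is how HNB avoids the NP-hard problem mentioned in the abstract.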
Liangxiao Jiang, Harry Zhang, Zhihua Cai, "A Novel Bayes Model: Hidden Naive Bayes", IEEE Transactions on Knowledge & Data Engineering, vol.21, no. 10, pp. 1361-1371, October 2009, doi:10.1109/TKDE.2008.234
[1] N. Friedman, D. Geiger, and M. Goldszmidt, “Bayesian Network Classifiers,” Machine Learning, vol. 29, pp. 131-163, 1997.
[2] D.M. Chickering, “Learning Bayesian Networks is NP-Complete,” Learning from Data: Artificial Intelligence and Statistics V, D. Fisher and H. Lenz, eds., pp. 121-130, Springer-Verlag, 1996.
[3] C.X. Ling and H. Zhang, “Toward Bayesian Classifiers with Accurate Probabilities,” Proc. Sixth Pacific-Asia Conf. Knowledge Discovery and Data Mining (PAKDD '02), pp. 123-134, 2002.
[4] P. Domingos, “MetaCost: A General Method for Making Classifiers Cost Sensitive,” Proc. Fifth Int'l Conf. Knowledge Discovery and Data Mining, pp. 155-164, 1999.
[5] A.P. Bradley, “The Use of the Area Under the ROC Curve in the Evaluation of Machine Learning Algorithms,” Pattern Recognition, vol. 30, pp. 1145-1159, 1997.
[6] C.X. Ling, J. Huang, and H. Zhang, “AUC: A Statistically Consistent and More Discriminating Measure than Accuracy,” Proc. Int'l Joint Conf. Artificial Intelligence (IJCAI '03), pp. 329-341, 2003.
[7] D.J. Hand and R.J. Till, “A Simple Generalisation of the Area under the ROC Curve for Multiple Class Classification Problems,” Machine Learning, vol. 45, pp. 171-186, 2001.
[8] D. Grossman and P. Domingos, “Learning Bayesian Network Classifiers by Maximizing Conditional Likelihood,” Proc. 21st Int'l Conf. Machine Learning, pp. 361-368, 2004.
[9] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[10] E. Keogh and M. Pazzani, “Learning Augmented Bayesian Classifiers: A Comparison of Distribution-Based and Classification-Based Approaches,” Proc. Int'l Workshop Artificial Intelligence and Statistics, pp. 225-230, 1999.
[11] H. Zhang and C.X. Ling, “An Improved Learning Algorithm for Augmented Naive Bayes,” Proc. Fifth Pacific-Asia Conf. Knowledge Discovery and Data Mining (PAKDD '01), pp. 581-586, 2001.
[12] G.I. Webb, J. Boughton, and Z. Wang, “Not So Naive Bayes: Aggregating One-Dependence Estimators,” Machine Learning, vol. 58, pp. 5-24, 2005.
[13] P. Langley and S. Sage, “Induction of Selective Bayesian Classifiers,” Proc. 10th Conf. Uncertainty in Artificial Intelligence, pp. 399-406, 1994.
[14] L. Jiang, H. Zhang, Z. Cai, and J. Su, “Evolutional Naive Bayes,” Proc. First Int'l Symp. Intelligent Computation and Its Applications (ISICA '05), pp. 344-350, 2005.
[15] R. Kohavi and G. John, “Wrappers for Feature Subset Selection,” Artificial Intelligence J., vol. 97, nos. 1/2, pp. 273-324, 1997.
[16] C.A. Ratanamahatana and D. Gunopulos, “Scaling up the Naive Bayesian Classifier: Using Decision Trees for Feature Selection,” Proc. Workshop Data Cleaning and Preprocessing (DCAP '02), at IEEE Int'l Conf. Data Mining (ICDM '02), 2002.
[17] J.T.A.S. Ferreira, D.G.T. Denison, and D.J. Hand, “Weighted Naive Bayes Modelling for Data Mining,” Dept. of Math., Imperial College, 2001.
[18] H. Zhang and S. Sheng, “Learning Weighted Naive Bayes with Accurate Ranking,” Proc. Fourth IEEE Int'l Conf. Data Mining (ICDM '04), pp. 567-570, 2004.
[19] W. Deng, G. Wang, and Y. Wang, “Weighted Naive Bayes Classification Algorithm Based on Rough Set,” Computer Science, vol. 34, pp. 204-206, 2007.
[20] M. Hall, “A Decision Tree-Based Attribute Weighting Filter for Naive Bayes,” Knowledge-Based Systems, vol. 20, pp. 120-126, 2007.
[21] R. Kohavi, “Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid,” Proc. Second Int'l Conf. Knowledge Discovery and Data Mining (KDD '96), pp. 202-207, 1996.
[22] J.R. Quinlan, C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
[23] E. Frank, M. Hall, and B. Pfahringer, “Locally Weighted Naive Bayes,” Proc. Conf. Uncertainty in Artificial Intelligence, pp. 249-256, 2003.
[24] Z. Zheng and G.I. Webb, “Lazy Learning of Bayesian Rules,” Machine Learning, vol. 41, no. 1, pp. 53-84, 2000.
[25] Z. Xie, W. Hsu, Z. Liu, and M. Lee, “SNNB: A Selective Neighborhood Based Naive Bayes for Lazy Learning,” Proc. Sixth Pacific-Asia Conf. Knowledge Discovery and Data Mining (PAKDD '02), pp. 104-114, 2002.
[26] L. Jiang, D. Wang, H. Zhang, Z. Cai, and B. Huang, “Using Instance Cloning to Improve Naive Bayes for Ranking,” Int'l J.Pattern Recognition and Artificial Intelligence, vol. 22, no. 6, pp.1121-1140, 2008.
[27] C. Merz, P. Murphy, and D. Aha, “UCI Repository of Machine Learning Databases,” Dept. of ICS, Univ. of California, , 1997.
[28] I.H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques, second ed. Morgan Kaufmann,, 2005.
[29] C. Nadeau and Y. Bengio, “Inference for The Generalization Error,” Advances in Neural Information Processing Systems, vol. 12, pp. 307-313, MIT Press, 1999.
[30] J. Sun, C. Wang, and S. Chen, “A Double Layer Bayesian Classifier,” Proc. Fourth Int'l Conf. Fuzzy Systems and Knowledge Discovery (FSKD '07), vol. 1, pp. 540-544, 2007.