ASCII Text
Huanhuan Chen, Peter Tiňo, Xin Yao, "Predictive Ensemble Pruning by Expectation Propagation," IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 7, pp. 999-1013, July 2009.
BibTeX
@article{10.1109/TKDE.2009.62,
  author    = {Huanhuan Chen and Peter Tiňo and Xin Yao},
  title     = {Predictive Ensemble Pruning by Expectation Propagation},
  journal   = {IEEE Transactions on Knowledge and Data Engineering},
  volume    = {21},
  number    = {7},
  issn      = {1041-4347},
  year      = {2009},
  pages     = {999-1013},
  doi       = {http://doi.ieeecomputersociety.org/10.1109/TKDE.2009.62},
  publisher = {IEEE Computer Society},
  address   = {Los Alamitos, CA, USA}
}
RefWorks / ProCite / RefMan / EndNote
TY  - JOUR
JO  - IEEE Transactions on Knowledge and Data Engineering
TI  - Predictive Ensemble Pruning by Expectation Propagation
IS  - 7
SN  - 1041-4347
SP  - 999
EP  - 1013
A1  - Huanhuan Chen
A1  - Peter Tiňo
A1  - Xin Yao
PY  - 2009
KW  - Machine learning
KW  - probabilistic algorithms
KW  - ensemble learning
KW  - regression
KW  - classification
VL  - 21
JA  - IEEE Transactions on Knowledge and Data Engineering
ER  -
[1] L.K. Hansen and P. Salamon, "Neural Network Ensembles," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 10, pp. 993-1001, Oct. 1990.
[2] L. Breiman, "Bagging Predictors," Machine Learning, vol. 24, no. 2, pp. 123-140, 1996.
[3] R.E. Schapire, "A Brief Introduction to Boosting," Proc. 16th Int'l Joint Conf. Artificial Intelligence, pp. 1401-1406, 1999.
[4] L. Breiman, "Arcing Classifier," Annals of Statistics, vol. 26, no. 3, pp. 801-849, 1998.
[5] L. Breiman, "Random Forests," Machine Learning, vol. 45, no. 1, pp. 5-32, 2001.
[6] J.J. Rodriguez, L.I. Kuncheva, and C.J. Alonso, "Rotation Forest: A New Classifier Ensemble Method," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 10, pp. 1619-1630, Oct. 2006.
[7] D. Zhang, S. Chen, Z. Zhou, and Q. Yang, "Constraint Projections for Ensemble Learning," Proc. 23rd AAAI Conf. Artificial Intelligence (AAAI '08), pp. 758-763, 2008.
[8] Y. Liu and X. Yao, "Ensemble Learning via Negative Correlation," Neural Networks, vol. 12, no. 10, pp. 1399-1404, 1999.
[9] M.M. Islam, X. Yao, and K. Murase, "A Constructive Algorithm for Training Cooperative Neural Network Ensembles," IEEE Trans. Neural Networks, vol. 14, no. 4, pp. 820-834, 2003.
[10] X. Yao and Y. Liu, "Making Use of Population Information in Evolutionary Artificial Neural Networks," IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 28, no. 3, pp. 417-425, June 1998.
[11] Z. Zhou, J. Wu, and W. Tang, "Ensembling Neural Networks: Many Could Be Better Than All," Artificial Intelligence, vol. 137, nos. 1/2, pp. 239-263, 2002.
[12] T.G. Dietterich, "An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization," Machine Learning, vol. 40, no. 2, pp. 139-157, 2000.
[13] D.D. Margineantu and T.G. Dietterich, "Pruning Adaptive Boosting," Proc. 14th Int'l Conf. Machine Learning, pp. 211-218, 1997.
[14] R.E. Banfield, L.O. Hall, K.W. Bowyer, and W.P. Kegelmeyer, "Ensemble Diversity Measures and Their Application to Thinning," Information Fusion, vol. 6, no. 1, pp. 49-62, 2005.
[15] Y. Kim, W.N. Street, and F. Menczer, "Meta-Evolutionary Ensembles," Proc. 2002 Int'l Joint Conf. Neural Networks, vol. 3, pp. 2791-2796, 2002.
[16] H. Chen, P. Tino, and X. Yao, "A Probabilistic Ensemble Pruning Algorithm," Proc. Sixth IEEE Int'l Conf. Data Mining Workshops: Optimization-Based Data Mining Techniques with Applications, pp. 878-882, 2006.
[17] L. Breiman, "Stacked Regressions," Machine Learning, vol. 24, no. 1, pp. 49-64, 1996.
[18] S. Hashem, "Optimal Linear Combinations of Neural Networks," PhD dissertation, Purdue Univ., 1993.
[19] M. LeBlanc and R. Tibshirani, "Combining Estimates in Regression and Classification," J. Am. Statistical Assoc., vol. 91, no. 436, pp. 1641-1650, 1996.
[20] T.P. Minka, "Expectation Propagation for Approximate Bayesian Inference," Proc. 17th Conf. Uncertainty in Artificial Intelligence (UAI '01), pp. 362-369, 2001.
[21] N.V. Chawla, L.O. Hall, K.W. Bowyer, and W.P. Kegelmeyer, "Learning Ensembles from Bites: A Scalable and Accurate Approach," J. Machine Learning Research, vol. 5, pp. 421-451, 2004.
[22] A. Prodromidis and P. Chan, "Meta-Learning in a Distributed Data Mining System: Issues and Approaches," Proc. 14th Int'l Conf. Machine Learning, pp. 211-218, 1998.
[23] Y. Zhang, S. Burer, and W.N. Street, "Ensemble Pruning via Semidefinite Programming," J. Machine Learning Research, vol. 7, pp. 1315-1338, 2006.
[24] J.M. Bates and C.W.J. Granger, "The Combination of Forecasts," Operations Research, vol. 20, pp. 451-468, 1969.
[25] J.A. Benediktsson, J.R. Sveinsson, O.K. Ersoy, and P.H. Swain, "Parallel Consensual Neural Networks," IEEE Trans. Neural Networks, vol. 8, no. 1, pp. 54-64, Jan. 1997.
[26] N. Ueda, "Optimal Linear Combination of Neural Networks for Improving Classification Performance," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 2, pp. 207-215, Feb. 2000.
[27] A. Demiriz, K.P. Bennett, and J. Shawe-Taylor, "Linear Programming Boosting via Column Generation," Machine Learning, vol. 46, nos. 1-3, pp. 225-254, 2002.
[28] M.E. Tipping, "Sparse Bayesian Learning and the Relevance Vector Machine," J. Machine Learning Research, vol. 1, pp. 211-244, 2001.
[29] A. Faul and M. Tipping, "Analysis of Sparse Bayesian Learning," Advances in Neural Information Processing Systems, vol. 14, pp. 383-389, 2002.
[30] Y. Qi, T.P. Minka, R.W. Picard, and Z. Ghahramani, "Predictive Automatic Relevance Determination by Expectation Propagation," Proc. 21st Int'l Conf. Machine Learning (ICML '04), p. 85, 2004.
[31] C. Andrieu, N. de Freitas, A. Doucet, and M.I. Jordan, "An Introduction to MCMC for Machine Learning," Machine Learning, vol. 50, nos. 1/2, pp. 5-43, 2003.
[32] J.V. Hansen, "Combining Predictors: Meta Machine Learning Methods and Bias/Variance and Ambiguity Decompositions," PhD dissertation, Dept. of Computer Science, Univ. of Aarhus, 2000.
[33] G. Ridgeway, D. Madigan, and T. Richardson, "Boosting Methodology for Regression Problems," Proc. Artificial Intelligence and Statistics, pp. 152-161, 1999.
[34] A. Asuncion and D. Newman, "UCI Machine Learning Repository," http://mlearn.ics.uci.edu/MLRepository.html, 2007.
[35] D. Opitz and R. Maclin, "Popular Ensemble Methods: An Empirical Study," J. Artificial Intelligence Research, vol. 11, pp. 169-198, 1999.
[36] J. Demšar, "Statistical Comparisons of Classifiers over Multiple Data Sets," J. Machine Learning Research, vol. 7, pp. 1-30, 2006.
[37] M. Friedman, "The Use of Ranks to Avoid the Assumption of Normality Implicit in the Analysis of Variance," J. Am. Statistical Assoc., vol. 32, pp. 675-701, 1937.
[38] R.L. Iman and J.M. Davenport, "Approximations of the Critical Region of the Friedman Statistic," Comm. Statistics, pp. 571-595, 1980.
[39] O.J. Dunn, "Multiple Comparisons among Means," J. Am. Statistical Assoc., vol. 56, pp. 52-64, 1961.