Issue No. 2, February 2009 (vol. 31), pp. 364-369
Daniel Hernández-Lobato, Universidad Autónoma de Madrid, Cantoblanco
Gonzalo Martínez-Muñoz, Universidad Autónoma de Madrid, Cantoblanco
Alberto Suárez, Escuela Politécnica Superior, Madrid
ABSTRACT
The global prediction of a homogeneous ensemble of classifiers, generated by independent applications of a randomized learning algorithm to a fixed training set, is analyzed within a Bayesian framework. Assuming that majority voting is used, the prediction of the complete ensemble can be estimated with a given confidence level by querying only a subset of its classifiers. For a particular instance that needs to be classified, polling can be halted as soon as the probability that the remaining votes will not change the predicted class exceeds the specified confidence level. Experiments on a collection of benchmark classification problems using representative parallel ensembles, such as bagging and random forests, confirm the validity of the analysis and demonstrate the effectiveness of the proposed instance-based ensemble pruning method.
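To make the halting rule concrete, the following is a minimal Python sketch of the idea, not the authors' exact procedure: it treats the votes still to be collected as draws from a Polya urn seeded with the votes observed so far (an add-one uniform prior over classes) and estimates, by Monte Carlo simulation, the probability that the current plurality class would remain the winner once all classifiers had voted; polling stops when that estimate exceeds the confidence level. The function name, ensemble size, simulation count, and prior are illustrative assumptions, and the paper computes this probability analytically rather than by simulation.

import numpy as np

def can_stop_polling(vote_counts, ensemble_size, confidence=0.99,
                     n_sim=2000, seed=None):
    """Monte Carlo check of whether the unqueried classifiers could still
    change the current majority-vote prediction (illustrative sketch).

    vote_counts   : per-class vote counts from the classifiers queried so far.
    ensemble_size : total number of classifiers T in the ensemble.
    Returns (stop, predicted_class, estimated_probability).
    """
    rng = np.random.default_rng(seed)
    counts = np.asarray(vote_counts, dtype=float)
    queried = int(counts.sum())
    remaining = ensemble_size - queried
    leader = int(counts.argmax())
    if remaining <= 0:
        return True, leader, 1.0

    agree = 0
    for _ in range(n_sim):
        urn = counts + 1.0                 # add-one uniform prior over classes
        extra = np.zeros_like(counts)
        for _ in range(remaining):         # Polya urn: draw a class, then reinforce it
            k = rng.choice(len(urn), p=urn / urn.sum())
            urn[k] += 1.0
            extra[k] += 1.0
        if int((counts + extra).argmax()) == leader:
            agree += 1

    prob = agree / n_sim
    return prob >= confidence, leader, prob

# Example: 15 of T=101 classifiers queried, votes split 12 vs 3.
stop, cls, p = can_stop_polling([12, 3], ensemble_size=101, confidence=0.99, seed=0)
print(stop, cls, round(p, 3))

In this hypothetical example, a lead of 12 votes to 3 after only 15 of 101 queries is typically enough to stop polling at the 99 percent confidence level, which is the source of the computational savings the abstract describes.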
INDEX TERMS
Ensemble learning, bagging, random forests, ensemble pruning, instance-based pruning, Polya urn.
CITATION
Daniel Hernández-Lobato, Gonzalo Martínez-Muñoz, Alberto Suárez, "Statistical Instance-Based Pruning in Ensembles of Independent Classifiers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 364-369, February 2009, doi:10.1109/TPAMI.2008.204