Issue No. 02 - February 2009 (vol. 31)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TPAMI.2008.78
Gonzalo Martínez-Muñoz , Universidad Autónoma de Madrid, Cantoblanco
Daniel Hernández-Lobato , Universidad Autónoma de Madrid, Cantoblanco
Alberto Suárez , Escuela Politécnica Superior, Madrid
Several pruning strategies that can be used to reduce the size and increase the accuracy of bagging ensembles are analyzed. These heuristics select subsets of complementary classifiers that, when combined, can perform better than the whole ensemble. The pruning methods investigated are based on modifying the order of aggregation of classifiers in the ensemble. In the original bagging algorithm, the order of aggregation is left unspecified. When this order is random, the generalization error typically decreases as the number of classifiers in the ensemble increases. If an appropriate ordering for the aggregation process is devised, the generalization error reaches a minimum at intermediate numbers of classifiers. This minimum lies below the asymptotic error of bagging. Pruned ensembles are obtained by retaining a fraction of the classifiers in the ordered ensemble. The performance of these pruned ensembles is evaluated in several benchmark classification tasks under different training conditions. The results of this empirical investigation show that ordered aggregation can be used for the efficient generation of pruned ensembles that are competitive, in terms of performance and robustness of classification, with computationally more costly methods that directly select optimal or near-optimal subensembles.
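To make the ordered-aggregation idea concrete, the following is a minimal, self-contained sketch in Python. It uses a toy one-dimensional dataset, decision stumps as base learners, and a greedy "reduce-error"-style ordering (append, at each step, the classifier that most lowers the partial ensemble's error). The dataset, the stump learner, and the particular ordering criterion are illustrative assumptions, not the exact heuristics analyzed in the paper.

```python
import random

# Illustrative sketch of ordered aggregation for bagging pruning.
# Base learner: a decision stump on 1-D inputs (predict +1 if x > t else -1).
# NOTE: the data generator and ordering rule are assumptions for illustration.

def make_data(n, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.uniform(-1, 1)
        y = 1 if x > 0.1 else -1
        if rng.random() < 0.1:   # add 10% label noise
            y = -y
        data.append((x, y))
    return data

def train_stump(sample):
    # Pick the threshold that minimizes error on the bootstrap sample.
    best_t, best_err = 0.0, float("inf")
    for t, _ in sample:
        err = sum(1 for x, y in sample if (1 if x > t else -1) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def predict(t, x):
    return 1 if x > t else -1

def ensemble_error(thresholds, data):
    # Majority vote of the stumps; ties broken toward +1.
    wrong = 0
    for x, y in data:
        vote = sum(predict(t, x) for t in thresholds)
        if (1 if vote >= 0 else -1) != y:
            wrong += 1
    return wrong / len(data)

def bagging(data, n_estimators, seed=1):
    rng = random.Random(seed)
    models = []
    for _ in range(n_estimators):
        sample = [rng.choice(data) for _ in data]  # bootstrap resample
        models.append(train_stump(sample))
    return models

def order_by_reduce_error(models, data):
    # Greedy ordering: at each step append the classifier whose inclusion
    # gives the lowest error of the partial ensemble so far.
    remaining = list(models)
    ordered = []
    while remaining:
        best = min(remaining,
                   key=lambda m: ensemble_error(ordered + [m], data))
        ordered.append(best)
        remaining.remove(best)
    return ordered

data = make_data(200)
models = bagging(data, 25)
ordered = order_by_reduce_error(models, data)
pruned = ordered[:5]   # retain a fraction (here ~20%) of the ordered ensemble
full_err = ensemble_error(models, data)
pruned_err = ensemble_error(pruned, data)
```

With an ordering like this, plotting `ensemble_error(ordered[:k], data)` against `k` typically shows the pattern the abstract describes: the error curve dips to a minimum at an intermediate ensemble size before approaching the full ensemble's error, so truncating the ordered sequence yields the pruned ensemble.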
Ensembles of classifiers, bagging, decision trees, ensemble selection, ensemble pruning, ordered aggregation.
G. Martínez-Muñoz, D. Hernández-Lobato and A. Suárez, "An Analysis of Ensemble Pruning Techniques Based on Ordered Aggregation," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 245-259, Feb. 2009.