Issue No. 01 - January (2007 vol. 29)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TPAMI.2007.2
Robert E. Banfield, IEEE
Lawrence O. Hall, IEEE
Kevin W. Bowyer, IEEE
W.P. Kegelmeyer, IEEE
We experimentally evaluate bagging and seven other randomization-based approaches to creating an ensemble of decision tree classifiers. We perform statistical tests on experimental results from 57 publicly available data sets. When cross-validation comparisons are tested for statistical significance, the best method is statistically more accurate than bagging on only eight of the 57 data sets. However, when we examine the average ranks of the algorithms across the group of data sets, boosting, random forests, and randomized trees are statistically significantly better than bagging. Because our results suggest that using an appropriate ensemble size is important, we introduce an algorithm that decides when a sufficient number of classifiers has been created for an ensemble. Our algorithm uses the out-of-bag error estimate and is shown to produce an accurate ensemble for those methods that incorporate bagging into the construction of the ensemble.
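The stopping criterion described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a bagged ensemble of decision stumps (for self-containment) and a simple plateau rule on the out-of-bag (OOB) error, stopping once the OOB estimate has not improved for a fixed number of rounds (`patience` is an assumed parameter, not taken from the paper).

```python
import numpy as np

def bagged_ensemble_with_oob_stop(X, y, max_trees=100, patience=10, seed=None):
    """Grow a bagged ensemble of decision stumps on binary labels {0, 1},
    stopping when the OOB error has not improved for `patience` rounds.
    A hypothetical sketch of OOB-based ensemble sizing, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    n = len(X)
    stumps = []                        # each stump: (feature, threshold, left_label, right_label)
    votes = np.zeros((n, 2))           # OOB vote tally per training sample
    best_err, since_best = 1.0, 0

    def fit_stump(Xb, yb):
        # Exhaustively pick the (feature, threshold, labeling) with fewest errors.
        best = (0, Xb[0, 0], 0, 1, len(Xb) + 1)
        for f in range(Xb.shape[1]):
            for t in np.unique(Xb[:, f]):
                left = Xb[:, f] <= t
                for ll, rl in ((0, 1), (1, 0)):
                    err = np.sum(yb[left] != ll) + np.sum(yb[~left] != rl)
                    if err < best[4]:
                        best = (f, t, ll, rl, err)
        return best[:4]

    def predict_stump(s, Xq):
        f, t, ll, rl = s
        return np.where(Xq[:, f] <= t, ll, rl)

    for _ in range(max_trees):
        idx = rng.integers(0, n, n)               # bootstrap sample (the "bag")
        oob = np.setdiff1d(np.arange(n), idx)     # samples left out of the bag
        s = fit_stump(X[idx], y[idx])
        stumps.append(s)
        votes[oob, predict_stump(s, X[oob])] += 1 # only OOB classifiers vote on a sample
        seen = votes.sum(axis=1) > 0
        oob_err = np.mean(votes[seen].argmax(axis=1) != y[seen])
        if oob_err < best_err:
            best_err, since_best = oob_err, 0
        else:
            since_best += 1
            if since_best >= patience:            # OOB error has plateaued: stop adding trees
                break
    return stumps, best_err
```

Because each sample is out-of-bag for roughly 37 percent of the bootstrap rounds, the OOB vote tally gives an unbiased error estimate without a held-out set, which is why the criterion applies only to methods that incorporate bagging.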
Classifier ensembles, bagging, boosting, random forests, random subspaces, performance evaluation.
R. E. Banfield, L. O. Hall, K. W. Bowyer and W. P. Kegelmeyer, "A Comparison of Decision Tree Ensemble Creation Techniques," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 29, no. 1, pp. 173-180, 2007.