Issue No. 3 - March 2014 (vol. 26)
pp. 549-562
Mehmet Fatih Amasyali, Yildiz Technical University, Istanbul
Okan K. Ersoy, Purdue University, West Lafayette
ABSTRACT
The extended space forest is a new method for decision tree construction in which training is performed on input vectors that contain all the original features plus random combinations of them. The combinations are generated by applying a difference operator to random pairs of original features. Experimental results show that the extended space versions of ensemble algorithms perform better than the original ensemble algorithms. To investigate the dynamics behind this success, the individual accuracies and the diversity-creation powers of the ensemble algorithms are compared. When it uses all the input features, the extended space forest creates more diversity than Bagging and Rotation Forest; when it uses random feature selection, it achieves higher individual accuracy than the Random Subspace and Random Forest methods. It needs more training time than the original algorithms because it uses more features, but its testing time is lower because it generates less complex base learners.
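As a rough illustration of the construction described above (a minimal sketch, not the authors' reference implementation: the function name extend_feature_space, the pair count, and the use of scikit-learn's BaggingClassifier as a stand-in ensemble are all assumptions), the following Python fragment appends random pairwise-difference features to a training matrix and fits a bagged decision-tree ensemble on the extended space:

    import numpy as np
    from sklearn.ensemble import BaggingClassifier  # its default base learner is a decision tree

    def extend_feature_space(X, pairs):
        # Append one new column per (i, j) pair: the difference of two
        # randomly chosen original features (the difference operator).
        return np.hstack([X, X[:, pairs[:, 0]] - X[:, pairs[:, 1]]])

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 10))                      # toy data with 10 original features
    y_train = (X_train[:, 0] - X_train[:, 1] > 0).astype(int)
    pairs = rng.integers(0, X_train.shape[1], size=(10, 2))   # random feature pairs, kept for test time
    model = BaggingClassifier(n_estimators=50).fit(extend_feature_space(X_train, pairs), y_train)
    # The same pairs must be reused to extend test vectors before calling model.predict.

Note that the pair indices are drawn once and stored, since test vectors must be mapped into the same extended space as the training data.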
INDEX TERMS
Training, bagging, heuristic algorithms, classification algorithms, testing, decision trees, accuracy, supervised learning, classifier ensembles, committees of learners, consensus theory, ensemble algorithms, mixtures of experts, multiple classifier systems, extended spaces, random forest, random subspace, rotation forest
CITATION
Mehmet Fatih Amasyali, Okan K. Ersoy, "Classifier Ensembles with the Extended Space Forest," IEEE Transactions on Knowledge & Data Engineering, vol. 26, no. 3, pp. 549-562, March 2014, doi:10.1109/TKDE.2013.9