2001 IEEE International Conference on Data Mining (ICDM 2001)
San Jose, California
Nov. 29, 2001 to Dec. 2, 2001
ISBN: 0-7695-1119-8
pp: 542
ABSTRACT
Decision trees are one of the most extensively used data mining models. Recently, a number of efficient, scalable algorithms for constructing decision trees on large disk-resident datasets have been introduced. In this paper, we study the problem of learning scalable decision trees from datasets with a biased class distribution. Our objective is to build decision trees that are more concise and more interpretable while maintaining the scalability of the model. To achieve this, our approach searches for subspace clusters of data cases of the biased class and enables multivariate splits based on weighted distances to such clusters. Other approaches to building concise and interpretable models, including multivariate decision trees and association rules, often introduce scalability and performance issues. The SSDT algorithm we present achieves this objective without loss of efficiency, scalability, or accuracy.
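To illustrate the multivariate-splitting idea sketched in the abstract, the following Python fragment shows one way a split test could be expressed as a weighted distance to a subspace cluster of biased-class cases. This is a minimal sketch under assumed conventions (the class names, the weighted Euclidean distance, and the fixed threshold are illustrative choices, not the paper's actual SSDT procedure).

import numpy as np

# Illustrative sketch only: names, weighting scheme, and threshold are
# assumptions for exposition, not the SSDT algorithm as published.

class SubspaceCluster:
    """A cluster of biased-class cases, described by a centroid and
    per-dimension weights (zero weight on dimensions outside the subspace)."""
    def __init__(self, centroid, weights):
        self.centroid = np.asarray(centroid, dtype=float)
        self.weights = np.asarray(weights, dtype=float)

    def weighted_distance(self, x):
        # Weighted Euclidean distance; dimensions with weight 0 do not contribute.
        diff = np.asarray(x, dtype=float) - self.centroid
        return float(np.sqrt(np.sum(self.weights * diff * diff)))


def subspace_split(X, cluster, threshold):
    """Multivariate split: cases whose weighted distance to the cluster falls
    below the threshold go to the left branch, the rest to the right."""
    d = np.array([cluster.weighted_distance(x) for x in X])
    left = d < threshold
    return left, ~left


if __name__ == "__main__":
    # Tiny usage example with made-up data; the cluster lives in the
    # subspace spanned by dimensions 0 and 2.
    X = np.array([[0.1, 5.0, 0.2],
                  [0.2, -3.0, 0.1],
                  [2.5, 1.0, 2.8]])
    c = SubspaceCluster(centroid=[0.15, 0.0, 0.15], weights=[1.0, 0.0, 1.0])
    left, right = subspace_split(X, c, threshold=0.5)
    print(left)   # e.g. [ True  True False]

In this reading, each internal node of the tree would test a single weighted-distance condition rather than a univariate threshold, which is what allows the resulting trees to stay concise.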
CITATION
Haixun Wang, Philip S. Yu, "SSDT: A Scalable Subspace-Splitting Classifier for Biased Data", Proceedings 2001 IEEE International Conference on Data Mining (ICDM 2001), pp. 542, 2001, doi:10.1109/ICDM.2001.989563