Issue No. 06 - June (2004 vol. 16)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TKDE.2004.3
Huimin Zhao, IEEE
Sudha Ram, IEEE
<p><b>Abstract</b>—While <it>decision tree</it> techniques have been widely used in <it>classification</it> applications, a shortcoming of many decision tree inducers is that they do not learn intermediate concepts, i.e., at each node, only one of the original features is involved in the branching decision. Combining decision tree inducers with other classification methods, which do learn intermediate concepts, can produce more flexible decision boundaries between classes, potentially improving classification accuracy. We propose a generic algorithm for <it>cascade generalization</it> of decision tree inducers with the <it>maximum cascading depth</it> as a parameter to constrain the degree of cascading. Previously proposed cascading methods, i.e., <it>loose coupling</it> and <it>tight coupling</it>, are strictly special cases of this new algorithm. We have empirically evaluated the proposed algorithm using logistic regression and C4.5 as base inducers on 32 UCI data sets and found that neither loose coupling nor tight coupling is always the best cascading strategy and that the maximum cascading depth in the proposed algorithm can be tuned for better classification accuracy. We have also empirically compared the proposed algorithm with <it>ensemble methods</it> such as <it>bagging</it> and <it>boosting</it> and found that, on average, the proposed algorithm performs marginally better than bagging and boosting.</p>
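<p>To make the idea concrete, below is a minimal sketch of the <it>loose coupling</it> special case: a logistic regression base learner is fit once on the original features, and its class-probability outputs are appended as new "intermediate concept" features before a decision tree is induced, letting the tree branch on linear combinations of the inputs. This is an illustrative reconstruction using scikit-learn on a standard UCI-derived data set, not the authors' constrained algorithm, which would additionally reapply the base learner within subtrees up to the maximum cascading depth.</p>

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: fit the base learner (logistic regression) on the original features.
base = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)

# Step 2 (loose coupling): append the base learner's class probabilities
# as extra features; each new feature is an "intermediate concept" that
# summarizes a linear combination of the original inputs.
X_tr_aug = np.hstack([X_tr, base.predict_proba(X_tr)])
X_te_aug = np.hstack([X_te, base.predict_proba(X_te)])

# Step 3: induce a decision tree on the augmented feature space; its
# splits may now use the intermediate-concept columns, yielding decision
# boundaries a single-feature-per-node tree cannot express.
tree = DecisionTreeClassifier(random_state=0).fit(X_tr_aug, y_tr)
print(tree.score(X_te_aug, y_te))
```

<p>Tight coupling would instead re-invoke the base learner at every internal node on that node's subset of the data; the paper's constrained algorithm interpolates between the two by capping how deep such re-invocation may go.</p>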
Machine learning, data mining, classification, decision tree, cascade generalization.
S. Ram and H. Zhao, "Constrained Cascade Generalization of Decision Trees," in IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 6, pp. 727-739, June 2004.