Issue No. 6, June 2004 (vol. 16), pp. 727-739
Huimin Zhao, Sudha Ram, IEEE
ABSTRACT
While decision tree techniques have been widely used in classification applications, a shortcoming of many decision tree inducers is that they do not learn intermediate concepts, i.e., at each node, only one of the original features is involved in the branching decision. Combining other classification methods, which learn intermediate concepts, with decision tree inducers can produce more flexible decision boundaries that separate different classes, potentially improving classification accuracy. We propose a generic algorithm for cascade generalization of decision tree inducers with the maximum cascading depth as a parameter that constrains the degree of cascading. Cascading methods proposed in the past, i.e., loose coupling and tight coupling, are strictly special cases of this new algorithm. We have empirically evaluated the proposed algorithm using logistic regression and C4.5 as base inducers on 32 UCI data sets and found that neither loose coupling nor tight coupling is always the best cascading strategy and that the maximum cascading depth in the proposed algorithm can be tuned for better classification accuracy. We have also empirically compared the proposed algorithm with ensemble methods such as bagging and boosting and found that the proposed algorithm performs marginally better than bagging and boosting on average.
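To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of cascade generalization in Python with scikit-learn. It assumes DecisionTreeClassifier (CART) as a stand-in for C4.5, and it approximates the paper's depth-constrained, per-node cascading with level-wise cascading: each level fits a logistic regression on the current feature space and appends its class probabilities as new, intermediate-concept features. The names fit_cascade, transform_cascade, and max_cascade_depth are illustrative, not from the paper.

```python
# Sketch of cascade generalization: logistic regression outputs become
# extra features for a decision tree. max_cascade_depth loosely mirrors
# the paper's maximum cascading depth parameter (an assumption; the
# paper constrains cascading per tree node, not per global level).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def fit_cascade(X, y, max_cascade_depth=1):
    """Fit one logistic regression per cascading level; each level's
    class probabilities are appended as new features for the next."""
    models, X_aug = [], X
    for _ in range(max_cascade_depth):
        lr = LogisticRegression(max_iter=1000).fit(X_aug, y)
        models.append(lr)
        X_aug = np.hstack([X_aug, lr.predict_proba(X_aug)])
    return models, X_aug

def transform_cascade(models, X):
    """Rebuild the augmented feature space for unseen data."""
    X_aug = X
    for lr in models:
        X_aug = np.hstack([X_aug, lr.predict_proba(X_aug)])
    return X_aug

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models, X_tr_aug = fit_cascade(X_tr, y_tr, max_cascade_depth=2)
tree = DecisionTreeClassifier(random_state=0).fit(X_tr_aug, y_tr)
print(tree.score(transform_cascade(models, X_te), y_te))
```

In this simplified reading, max_cascade_depth=0 recovers a plain decision tree, and a single augmentation level before tree induction roughly corresponds to loose coupling; tight coupling, by contrast, reapplies the base inducer at every internal node of the tree.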
INDEX TERMS
Machine learning, data mining, classification, decision tree, cascade generalization.
CITATION
Huimin Zhao, Sudha Ram, "Constrained Cascade Generalization of Decision Trees," IEEE Transactions on Knowledge & Data Engineering, vol. 16, no. 6, pp. 727-739, June 2004, doi:10.1109/TKDE.2004.3