Issue No. 1, January 2011 (vol. 23)
ISSN: 1041-4347
pp: 64-78
Kevin Y. Yip, Yale University, New Haven
Ben Kao, The University of Hong Kong, Hong Kong
Wai-Shing Ho, The University of Hong Kong, Hong Kong
Smith Tsang, The University of Hong Kong, Hong Kong
Sau Dan Lee, The University of Hong Kong, Hong Kong
Traditional decision tree classifiers work with data whose values are known and precise. We extend such classifiers to handle data with uncertain information. Value uncertainty arises in many applications during the data collection process. Example sources of uncertainty include measurement/quantization errors, data staleness, and multiple repeated measurements. With uncertainty, the value of a data item is often represented not by a single value but by multiple values forming a probability distribution. Rather than abstracting uncertain data by statistical derivatives (such as the mean or median), we show that the accuracy of a decision tree classifier can be much improved if the "complete information" of a data item (i.e., its probability density function (pdf)) is utilized. We extend classical decision tree building algorithms to handle data tuples with uncertain values. Extensive experiments show that the resulting classifiers are more accurate than those built from value averages. Since processing pdfs is computationally more costly than processing single values (e.g., averages), decision tree construction on uncertain data is more CPU-demanding than that on certain data. To tackle this problem, we propose a series of pruning techniques that greatly improve construction efficiency.
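The core idea the abstract describes, using the full pdf of each attribute value rather than its mean, can be illustrated with a small sketch (not the authors' implementation). When a tuple's attribute value is a discrete pdf, a test "A <= z" splits the tuple *fractionally* between the two branches, each branch receiving the probability mass on its side, and entropy is then computed over these fractional class counts. All names below (`entropy`, `split_entropy`, the sample data) are hypothetical, for illustration only:

```python
import math

def entropy(class_weights):
    """Shannon entropy of fractional class counts."""
    total = sum(class_weights.values())
    if total == 0:
        return 0.0
    h = 0.0
    for w in class_weights.values():
        if w > 0:
            p = w / total
            h -= p * math.log2(p)
    return h

def split_entropy(tuples, z):
    """Weighted entropy of the split 'value <= z'.

    Each tuple is (pdf, label), where pdf is a list of
    (value, probability) pairs summing to 1, i.e., a discrete
    approximation of the tuple's pdf.
    """
    left, right = {}, {}
    for pdf, label in tuples:
        # Fraction of this tuple that falls in the left branch.
        p_left = sum(p for v, p in pdf if v <= z)
        left[label] = left.get(label, 0.0) + p_left
        right[label] = right.get(label, 0.0) + (1.0 - p_left)
    n_left, n_right = sum(left.values()), sum(right.values())
    n = n_left + n_right
    return (n_left / n) * entropy(left) + (n_right / n) * entropy(right)

# Two tuples whose pdfs straddle the candidate split point z = 5:
data = [
    ([(4, 0.7), (6, 0.3)], "pos"),   # 70% of this tuple goes left
    ([(4, 0.2), (6, 0.8)], "neg"),   # 20% of this tuple goes left
]
h = split_entropy(data, 5)
```

A mean-based classifier would collapse each pdf to a single point (here, 4.6 and 5.6) before splitting, discarding exactly the probability mass that this fractional computation preserves; the pruning techniques mentioned in the abstract aim to avoid evaluating such entropies at every candidate split point.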
Index Terms: Uncertain data, decision tree, classification, data mining.
Kevin Y. Yip, Ben Kao, Wai-Shing Ho, Smith Tsang, Sau Dan Lee, "Decision Trees for Uncertain Data", IEEE Transactions on Knowledge & Data Engineering, vol. 23, no. 1, pp. 64-78, January 2011, doi:10.1109/TKDE.2009.175