Issue No. 12 - December 2005 (Vol. 17)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TKDE.2005.188
Shichao Zhang, IEEE
Many real-world data sets for machine learning and data mining contain missing values, and much previous research regards missing values as a problem to be solved by imputation before training and testing. In this paper, we study this issue in cost-sensitive learning, which considers both test costs and misclassification costs. If some attributes (tests) are too expensive to obtain values for, it can be more cost-effective to leave their values missing, much as expensive and risky tests (missing values) are skipped in patient diagnosis (classification). That is, "missing is useful": missing values can actually reduce the total cost of tests and misclassifications, so it is not meaningful to impute them. We discuss and compare several strategies that use only the known values and exploit "missing is useful" for cost reduction in cost-sensitive decision tree learning.
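The core trade-off can be illustrated with a small, hypothetical calculation: perform a test only when its cost is outweighed by the expected reduction in misclassification cost. The function name and all numbers below are made-up examples for illustration, not values or code from the paper.

```python
# Hypothetical illustration of the "missing is useful" idea: compare the
# expected total cost of a decision with and without paying for a test.
# All numbers are invented for this sketch.

def expected_total_cost(test_cost, p_error, misclassification_cost):
    """Expected total cost: cost of tests performed plus the expected
    cost of misclassifying the example."""
    return test_cost + p_error * misclassification_cost

# Skip the expensive test (leave the value missing): no test cost,
# but a higher chance of misclassification.
cost_skip = expected_total_cost(test_cost=0, p_error=0.20,
                                misclassification_cost=100)

# Take the expensive test: pay 50, and the error rate drops to 5%.
cost_take = expected_total_cost(test_cost=50, p_error=0.05,
                                misclassification_cost=100)

# 20.0 vs. 55.0: skipping the test is cheaper overall, so imputing
# the missing value would only add cost without adding benefit.
print(cost_skip, cost_take)
```

Under these invented costs, leaving the value missing wins; with a cheaper test or a costlier misclassification, the comparison can flip, which is exactly the decision a cost-sensitive tree makes per attribute.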
Index Terms: Induction, knowledge acquisition, machine learning.
Z. Qin, S. Sheng, S. Zhang, and C. X. Ling, "'Missing Is Useful': Missing Values in Cost-Sensitive Decision Trees," in IEEE Transactions on Knowledge & Data Engineering, vol. 17, no. 12, pp. 1689-1693, 2005.