2010 IEEE International Conference on Data Mining (2010)
Sydney, Australia
Dec. 13, 2010 to Dec. 17, 2010
ISSN: 1550-4786
ISBN: 978-0-7695-4256-0
pp: 649-658
Most classical learning algorithms owe their good performance to high-quality training data that are clean and unbiased. In many real-world problems, however, such data are harder than ever to obtain, because collecting large-scale unbiased data and labeling them precisely is difficult. In this paper, we propose a general Contrast Co-learning (CCL) framework that refines biased and noisy training data when an unbiased yet unlabeled data pool is available. CCL starts with multiple sets of possibly biased and noisy training data and trains a set of classifiers individually. Then, under the assumption that confidently classified samples are more likely to be correctly labeled, CCL iteratively and automatically filters out probable noise and adds confidently classified samples from the unlabeled pool to correct the bias. Through this process, we generate a cleaner and unbiased training dataset with theoretical guarantees. Extensive experiments on two public text datasets clearly show that CCL consistently improves classification performance on biased and noisy training data compared with several state-of-the-art classical algorithms.
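The abstract outlines an iterative loop: train classifiers on the noisy sets, drop training samples that a peer classifier confidently contradicts, and promote confidently classified samples from the unlabeled pool. A minimal sketch of that loop, using scikit-learn on hypothetical toy data (the helper names, thresholds, and three-round schedule are illustrative assumptions, not the authors' implementation):

```python
# Illustrative sketch of the Contrast Co-learning (CCL) idea: two classifiers
# trained on noisy sets iteratively (1) filter out samples the peer classifier
# confidently contradicts and (2) absorb confident samples from an unlabeled
# pool. Toy data, thresholds, and round count are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_noisy_set(n, flip=0.2):
    """Linearly separable 2-D data with a fraction of labels flipped."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    noisy = rng.random(n) < flip          # inject label noise
    y[noisy] = 1 - y[noisy]
    return X, y

def filter_noise(X, y, peer, thresh=0.9):
    """Drop samples where the peer classifier confidently disagrees."""
    p = peer.predict_proba(X)
    pred = peer.classes_[p.argmax(axis=1)]
    keep = ~((p.max(axis=1) > thresh) & (pred != y))
    return X[keep], y[keep]

def add_confident(X, y, peer, pool, thresh=0.95):
    """Move confidently classified pool samples into the training set."""
    p = peer.predict_proba(pool)
    conf = p.max(axis=1) > thresh
    X_new = np.vstack([X, pool[conf]])
    y_new = np.concatenate([y, peer.classes_[p[conf].argmax(axis=1)]])
    return X_new, y_new, pool[~conf]

X1, y1 = make_noisy_set(200)
X2, y2 = make_noisy_set(200)
X_pool = rng.normal(size=(500, 2))        # unlabeled, unbiased pool

for _ in range(3):                        # a few co-learning rounds
    c1 = LogisticRegression().fit(X1, y1)
    c2 = LogisticRegression().fit(X2, y2)
    X1, y1 = filter_noise(X1, y1, c2)     # each set is cleaned by the peer
    X2, y2 = filter_noise(X2, y2, c1)
    X1, y1, X_pool = add_confident(X1, y1, c2, X_pool)
    X2, y2, X_pool = add_confident(X2, y2, c1, X_pool)

# Evaluate the refined classifier on a clean held-out set.
X_test = rng.normal(size=(1000, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
acc = LogisticRegression().fit(X1, y1).score(X_test, y_test)
print(f"accuracy after refinement: {acc:.3f}")
```

The sketch captures only the mutual filter-and-augment dynamic; the paper's contribution also includes the contrast classifiers and theoretical guarantees, which this toy does not reproduce.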
Keywords: Noisy training data, Training data bias, Contrast Classifier, Co-learning

M. Zhang, Z. Zheng, N. Liu, Z. Chen, S. Yan and J. Yan, "A Novel Contrast Co-learning Framework for Generating High Quality Training Data," 2010 IEEE International Conference on Data Mining (ICDM), Sydney, Australia, 2010, pp. 649-658.