This paper presents a method for designing semi-supervised classifiers trained on both labeled and unlabeled samples. We focus on probabilistic semi-supervised classifier design for multi-class, single-labeled classification problems and propose a hybrid approach that takes advantage of both generative and discriminative approaches. In our approach, we first consider a generative model trained on labeled samples and introduce a bias correction model; these models belong to the same model family but have different parameters. We then construct a hybrid classifier by combining the two models on the basis of the maximum entropy principle. To apply our hybrid approach to text classification problems, we employ naive Bayes models as the generative and bias correction models. Our experimental results on four text data sets confirmed that the generalization ability of our hybrid classifier improved substantially when a large number of unlabeled samples were used for training and there were too few labeled samples to obtain good performance. We also confirmed that our hybrid approach significantly outperformed both the generative and discriminative approaches when their performance was comparable. Moreover, we examined the performance of our hybrid classifier when the labeled and unlabeled data distributions differed.
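As a rough illustration of the kind of combination the abstract describes, the sketch below mixes the class posteriors of two multinomial naive Bayes models through a weighted log-linear combination, R(y|x) ∝ P1(y|x)^λ1 · P2(y|x)^λ2. This is a minimal toy sketch, not the authors' implementation: the model parameters and the weights `lam` are invented for illustration, and in the paper's setting the combination weights would be estimated discriminatively (e.g., by maximizing conditional likelihood on held-out labeled data) rather than fixed by hand.

```python
import numpy as np

def nb_log_posterior(x, log_prior, log_cond):
    """Unnormalized class log-posterior of a multinomial naive Bayes model.

    x        : word-count vector, shape (n_words,)
    log_prior: log P(y), shape (n_classes,)
    log_cond : log P(word | y), shape (n_classes, n_words)
    """
    return log_prior + log_cond @ x

def hybrid_posterior(x, models, lam):
    """Combine models as R(y|x) proportional to prod_m P_m(y|x)^lam_m.

    Per-model normalizers are constant in y, so they cancel in the
    final softmax and we can work with unnormalized scores directly.
    """
    score = sum(l * nb_log_posterior(x, lp, lc)
                for l, (lp, lc) in zip(lam, models))
    score = score - score.max()          # numerical stability
    p = np.exp(score)
    return p / p.sum()

# Toy demo: two 2-class, 3-word models with hand-picked parameters.
lp1 = np.log(np.array([0.5, 0.5]))
lc1 = np.log(np.array([[0.6, 0.3, 0.1],
                       [0.1, 0.3, 0.6]]))
lp2 = np.log(np.array([0.4, 0.6]))
lc2 = np.log(np.array([[0.5, 0.4, 0.1],
                       [0.2, 0.3, 0.5]]))
x = np.array([2.0, 1.0, 0.0])            # word counts of one document
p = hybrid_posterior(x, [(lp1, lc1), (lp2, lc2)], lam=(0.7, 0.3))
```

Here `p` is a proper distribution over the two classes; since the document's counts concentrate on words favored by class 0 under both models, the hybrid posterior also favors class 0.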
generative model, maximum entropy principle, bias correction, unlabeled samples, text classification

A. Fujino, K. Saito and N. Ueda, "Semisupervised Learning for a Hybrid Generative/Discriminative Classifier based on the Maximum Entropy Principle," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 30, pp. 424-437, 2007.