Issue No. 06 - June 2009 (vol. 31)
ISSN: 0162-8828
pp: 1074-1086
Xuejun Liao, Duke University, Durham
Hui Li, Signal Innovations Group, Inc., Durham
Lawrence Carin, Duke University, Durham
Qiuhua Liu, Duke University, Durham
Jason R. Stack, Office of Naval Research, Arlington
ABSTRACT
Context plays an important role in classification, and in this paper we examine context from two perspectives. First, the classification of items within a single task is placed within the context of distinct concurrent or previous classification tasks (multiple distinct data collections). This is referred to as multitask learning (MTL), and it is implemented here in a statistical manner, using a simplified form of the Dirichlet process. In addition, when performing many classification tasks one has simultaneous access to all of the unlabeled data that must be classified, so there is an opportunity to place the classification of any one feature vector within the context of all unlabeled feature vectors; this is referred to as semisupervised learning. In this paper we integrate MTL and semisupervised learning into a single framework, thereby exploiting two forms of contextual information. Results are presented on a "toy" example to demonstrate the concept, and the algorithm is also applied to three real data sets.
INDEX TERMS
Machine learning, pattern recognition
CITATION
Xuejun Liao, Hui Li, Lawrence Carin, Qiuhua Liu, Jason R. Stack, "Semisupervised Multitask Learning", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 6, pp. 1074-1086, June 2009, doi:10.1109/TPAMI.2008.296