Issue No. 07 - July (2011 vol. 23)
ISSN: 1041-4347
pp: 1079-1089
Grigorios Tsoumakas , Aristotle University of Thessaloniki, Thessaloniki
Ioannis Katakis , Aristotle University of Thessaloniki, Thessaloniki
Ioannis Vlahavas , Aristotle University of Thessaloniki, Thessaloniki
A simple yet effective multilabel learning method, called label powerset (LP), considers each distinct combination of labels that exists in the training set as a different class value of a single-label classification task. The computational efficiency and predictive performance of LP are challenged by application domains with large numbers of labels and training examples. In these cases, the number of classes may become very large, and at the same time many classes are associated with very few training examples. To deal with these problems, this paper proposes breaking the initial set of labels into a number of small random subsets, called labelsets, and employing LP to train a corresponding classifier for each. The labelsets can be either disjoint or overlapping, depending on which of two strategies is used to construct them. The proposed method is called RAkEL (RAndom k labELsets), where k is a parameter that specifies the size of the subsets. Empirical evidence indicates that RAkEL improves substantially over LP, especially in domains with a large number of labels, and exhibits competitive performance against other high-performing multilabel learning methods.
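The two core ingredients described above, drawing random k-labelsets (disjoint or overlapping) and combining the per-labelset LP predictions by vote averaging, can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation; the function names, the `threshold` parameter, and the representation of an LP prediction as a set of labels are assumptions made here for clarity.

```python
import random
from collections import Counter

def draw_labelsets(labels, k, m, overlapping=True, seed=0):
    """Draw random k-labelsets from the full label set.

    overlapping=True: sample m labelsets independently (overlapping variant).
    overlapping=False: shuffle and partition the labels into disjoint
    k-sized chunks (the disjoint variant; m is then ignored).
    """
    rng = random.Random(seed)
    labels = list(labels)
    if overlapping:
        return [tuple(sorted(rng.sample(labels, k))) for _ in range(m)]
    rng.shuffle(labels)
    return [tuple(sorted(labels[i:i + k])) for i in range(0, len(labels), k)]

def rakel_vote(labelsets, predictions, threshold=0.5):
    """Combine per-labelset LP predictions for one test instance.

    predictions[i] is the subset of labelsets[i] predicted positive by the
    i-th LP classifier. A label enters the final output when its mean
    positive vote, over the classifiers that consider it, exceeds threshold.
    """
    votes, counts = Counter(), Counter()
    for labelset, predicted in zip(labelsets, predictions):
        for label in labelset:
            counts[label] += 1          # this classifier votes on label
            if label in predicted:
                votes[label] += 1       # positive vote
    return {label for label in counts if votes[label] / counts[label] > threshold}
```

Each labelset would be handed to a single-label learner trained on the distinct label combinations within it (the LP transformation); `rakel_vote` then aggregates those classifiers' outputs per instance.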
Categorization, multilabel, ensembles, labelset, classification.

G. Tsoumakas, I. Katakis and I. Vlahavas, "Random k-Labelsets for Multilabel Classification," in IEEE Transactions on Knowledge & Data Engineering, vol. 23, no. 7, pp. 1079-1089, July 2011.