Issue No. 03 - July-Sept. (2014 vol. 5)
ISSN: 1949-3045
pp: 314-326
Hector P. Martinez , Institute of Digital Games, University of Malta, Msida, Malta
Georgios N. Yannakakis , Institute of Digital Games, University of Malta, Msida, Malta
John Hallam , Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Odense, Denmark
ABSTRACT
How should affect be appropriately annotated and how should machine learning best be employed to map manifestations of affect to affect annotations? What is the use of ratings of affect for the study of affective computing and how should we treat them? These are the key questions this paper attempts to address by investigating the impact of dissimilar representations of annotated affect on the efficacy of affect modelling. In particular, we compare several different binary-class and pairwise preference representations for automatically learning from ratings of affect. The representations are compared and tested on three datasets: one synthetic dataset (testing “in vitro”) and two affective datasets (testing “in vivo”). The synthetic dataset couples a number of attributes with generated rating values. The two affective datasets contain physiological and contextual user attributes, and speech attributes, respectively; these attributes are coupled with ratings of various affective and cognitive states. The main results of the paper suggest that ratings (when used) should be naturally transformed to ordinal (ranked) representations for obtaining more reliable and generalisable models of affect. The findings of this paper have a direct impact on affect annotation and modelling research but, most importantly, challenge the traditional state-of-practice in affective computing and psychometrics at large.
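The core transformation the abstract advocates — turning raw rating values into an ordinal (pairwise preference) representation — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the `margin` parameter (a tolerance below which two ratings are treated as tied), and the example data are assumptions for demonstration only.

```python
# Illustrative sketch: convert affect ratings into pairwise preference
# training pairs, i.e. an ordinal (ranked) representation of the data.
# The `margin` parameter and sample ratings are hypothetical.

def ratings_to_preferences(ratings, margin=0):
    """Return (preferred, other) index pairs for every ordered pair of
    samples whose rating difference exceeds `margin`."""
    pairs = []
    n = len(ratings)
    for i in range(n):
        for j in range(n):
            if i != j and ratings[i] - ratings[j] > margin:
                pairs.append((i, j))  # sample i rated higher than sample j
    return pairs

# e.g. four samples with self-reported ratings on a 1-5 scale
ratings = [3, 5, 5, 1]
print(ratings_to_preferences(ratings, margin=1))
# → [(0, 3), (1, 0), (1, 3), (2, 0), (2, 3)]
```

Note that the two samples rated 5 produce no pair between themselves: preference representations naturally discard ties and absolute scale, keeping only the relative order that, per the paper's results, generalises better than treating ratings as class labels.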
INDEX TERMS
Training, Computational modeling, Data models, Affective computing, Predictive models, Transforms, Numerical models
CITATION

H. P. Martinez, G. N. Yannakakis and J. Hallam, "Don’t Classify Ratings of Affect; Rank Them!," in IEEE Transactions on Affective Computing, vol. 5, no. 3, pp. 314-326, 2014.
doi:10.1109/TAFFC.2014.2352268