2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2017)
Honolulu, Hawaii, USA
July 21, 2017 to July 26, 2017
Recent work in affective computing has focused on recognizing affect from facial expressions, with far less attention paid to the body. This work focuses on body affect. Affect does not occur in isolation: humans usually couple affect with an action; for example, a person could be running and happy. Recognizing body affect in sequences requires efficient algorithms that capture both the micro movements that differentiate between happy and sad and the macro variations between different actions. We depart from traditional approaches to time-series data analytics by proposing a multi-task learning model that learns a shared representation well-suited for action-affect-gender classification. For this paper we choose a probabilistic model, specifically Conditional Restricted Boltzmann Machines (CRBMs), as our building block. We propose a new model that enhances the CRBM with a factored multi-task component, enabling it to scale to a larger number of classes without increasing the number of parameters. We evaluate our approach on two publicly available datasets, the Body Affect dataset and the Tower Game dataset, and show superior classification performance over the state-of-the-art.
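To make the parameter-scaling claim concrete, the following is a minimal NumPy sketch (not the paper's exact model) of a factored, class-conditioned weight tensor in the style of factored CRBMs: instead of learning a dense weight matrix per class, each class contributes only a small vector of per-factor gains. All variable names and dimensions here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch, not the authors' implementation: per-class dense
# visible-hidden weights would cost n_classes * n_v * n_h parameters,
# whereas factoring W_c = A @ diag(s_c) @ B adds only n_f per class.

rng = np.random.default_rng(0)
n_v, n_h, n_f, n_classes = 60, 40, 20, 12  # visible, hidden, factors, classes

A = rng.standard_normal((n_v, n_f)) * 0.01       # visible-to-factor weights
B = rng.standard_normal((n_f, n_h)) * 0.01       # factor-to-hidden weights
S = rng.standard_normal((n_classes, n_f)) * 0.01  # per-class factor gains

def class_weights(c):
    """Effective visible-hidden weight matrix for class c."""
    return (A * S[c]) @ B  # equals A @ diag(S[c]) @ B via broadcasting

dense_params = n_classes * n_v * n_h
factored_params = n_v * n_f + n_f * n_h + n_classes * n_f
print(dense_params, factored_params)  # the factored count grows by n_f per added class
```

Adding a new class costs only `n_f` new parameters (one row of `S`), while the shared factors `A` and `B` are reused across all tasks, which is the sense in which the factored component scales with the number of classes.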
Keywords: Probability distribution, Data models, Analytical models, Animation, Poles and towers, Games, Skeleton
Timothy J. Shields, Mohamed R. Amer, Max Ehrlich, Amir Tamrakar, "Action-Affect-Gender Classification Using Multi-task Representation Learning", 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 2249-2258, 2017, doi:10.1109/CVPRW.2017.279