2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017) (2017)
Washington, DC, USA
May 30, 2017 to June 3, 2017
ISBN: 978-1-5090-4023-0
pp: 833-838
ABSTRACT
We propose a new facial expression recognition model that introduces more than 30 detailed facial expressions recognisable by any artificial intelligence interacting with a human. We introduce two categories of emotions, namely dominant emotions and complementary emotions. In this work, the complementary emotion is recognised from the eye region if the dominant emotion is angry, fearful, or sad, whereas if the dominant emotion is disgust or happiness the complementary emotion is conveyed mainly by the mouth. To verify the tagged dominant and complementary emotions, randomly chosen people voted on the recognised multi-emotional facial expressions. The average voting results show that 73.88% of the voters agree on the correctness of the recognised multi-emotional facial expressions.
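To make the region-selection logic concrete, the sketch below illustrates one possible two-stage pipeline under stated assumptions: scikit-learn's SVC is used as the C-support vector classifier (matching the C-SVC named in the title), the emotion label sets and feature dimensions are hypothetical, and the synthetic training data exists only to make the example executable. It is not the authors' implementation; real features would come from facial-landmark-based region descriptors.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical label groupings; the paper's full taxonomy of 30+ expressions is not reproduced here.
EYE_DRIVEN = {"angry", "fearful", "sad"}     # complementary emotion read from the eye region
MOUTH_DRIVEN = {"disgust", "happiness"}      # complementary emotion read from the mouth region

# scikit-learn's SVC implements C-support vector classification; one model per feature source.
dominant_clf = SVC(C=1.0, kernel="rbf")      # whole-face features -> dominant emotion
eye_clf = SVC(C=1.0, kernel="rbf")           # eye-region features -> complementary emotion
mouth_clf = SVC(C=1.0, kernel="rbf")         # mouth-region features -> complementary emotion


def predict_multi_emotion(face_feat, eye_feat, mouth_feat):
    """Return a (dominant, complementary) label pair for one face sample."""
    dominant = dominant_clf.predict(face_feat.reshape(1, -1))[0]
    if dominant in EYE_DRIVEN:               # eyes carry the complementary cue
        complementary = eye_clf.predict(eye_feat.reshape(1, -1))[0]
    elif dominant in MOUTH_DRIVEN:           # mouth carries the complementary cue
        complementary = mouth_clf.predict(mouth_feat.reshape(1, -1))[0]
    else:
        complementary = None                 # no complementary label assigned in this sketch
    return dominant, complementary


if __name__ == "__main__":
    # Train on synthetic features purely to make the sketch runnable;
    # the label names and feature sizes below are illustrative only.
    rng = np.random.default_rng(0)
    y_dominant = ["angry", "sad", "disgust", "happiness"] * 10
    y_complementary = ["surprised", "contempt", "bored", "relieved"] * 10
    dominant_clf.fit(rng.normal(size=(40, 64)), y_dominant)
    eye_clf.fit(rng.normal(size=(40, 32)), y_complementary)
    mouth_clf.fit(rng.normal(size=(40, 32)), y_complementary)

    dom, comp = predict_multi_emotion(rng.normal(size=64),
                                      rng.normal(size=32),
                                      rng.normal(size=32))
    print(dom, comp)
```

The design choice mirrored here is that the dominant emotion is decided first from the whole face, and only then is the complementary classifier chosen, so the eye and mouth models never compete on the same sample.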
CITATION
Christer Loob, Pejman Rasti, Iiris Lusi, Julio C. S. Jacques Junior, Xavier Baro, Sergio Escalera, Tomasz Sapinski, Dorota Kaminska, Gholamreza Anbarjafari, "Dominant and Complementary Multi-Emotional Facial Expression Recognition Using C-Support Vector Classification", 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pp. 833-838, 2017, doi: 10.1109/FG.2017.106