Facial dynamics can be considered unique signatures for discriminating between people, and they have become an important topic now that many devices can be unlocked through face recognition or verification. In this work, we evaluate the efficacy of the transition frames of emotional video, compared with the peak-emotion frames, for person identification. For the experiments with transition frames, we extract features from each video frame using a fine-tuned VGG-Face Convolutional Neural Network (CNN), together with geometric features from facial landmark points. To model the temporal context of the transition frames, we train a Long Short-Term Memory (LSTM) network on the geometric and the CNN features. Furthermore, we employ two fusion strategies: an early fusion, in which the geometric and the CNN features are concatenated and fed to the LSTM, and a late fusion, in which the predictions of LSTMs trained independently on the two feature types are stacked and classified with a Support Vector Machine (SVM). Experimental results show that the late-fusion strategy gives the best results and that the transition frames yield better identification results than the peak-emotion frames.
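The late-fusion strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-video identity-score vectors that the two LSTMs would produce are simulated with synthetic data (the subject count, noise levels, and variable names such as `cnn_scores` and `geo_scores` are all hypothetical), and the stacked scores are then classified with an SVM as in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_videos = 10, 400          # hypothetical dataset size

# Ground-truth subject identity for each video clip.
y = rng.integers(0, n_subjects, size=n_videos)

# Stand-ins for the per-video identity scores of the two LSTMs
# (one trained on CNN features, one on geometric features);
# here simulated as a one-hot identity signal plus Gaussian noise.
one_hot = np.eye(n_subjects)[y]
cnn_scores = one_hot + 0.5 * rng.normal(size=(n_videos, n_subjects))
geo_scores = one_hot + 0.7 * rng.normal(size=(n_videos, n_subjects))

# Late fusion: stack the two score vectors and classify with an SVM.
fused = np.hstack([cnn_scores, geo_scores])
X_tr, X_te, y_tr, y_te = train_test_split(
    fused, y, test_size=0.25, random_state=0)
svm = SVC(kernel="linear").fit(X_tr, y_tr)
accuracy = svm.score(X_te, y_te)
```

Early fusion differs only in where the combination happens: the CNN and geometric feature vectors are concatenated per frame before being fed to a single LSTM, rather than combining the outputs of two separately trained models.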
Keywords: Face recognition, Face, Databases, Glass, Feature extraction, Recurrent neural networks, Light sources

R. E. Haamer et al., "Changes in Facial Expression as Biometric: A Database and Benchmarks of Identification," 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018)(FG), Xi'an, China, 2018, pp. 621-628.