Issue No. 01 - Jan.-March (2013 vol. 4)
ISSN: 1949-3045
pp: 69-82
Hoda Mohammadzade , University of Toronto, Toronto
Dimitrios Hatzinakos , University of Toronto, Toronto
Discriminant analysis methods are powerful tools for face recognition. However, they cannot be applied in the single-sample-per-person scenario, because the within-subject variability cannot be estimated from a single sample. In the generic-learning solution, this variability is instead estimated from the images of a generic training set, for which more than one sample per person is available. However, because a generic set yields a rather poor estimate of the within-subject variability, the performance of discriminant analysis methods remains unsatisfactory, particularly when images are under drastic facial expression variation. In this paper, we show that images with the same expression lie on a common subspace, which we call the expression subspace. We show that by projecting an image with an arbitrary expression into the expression subspaces, we can synthesize new expression images. Using the synthesized images for subjects with a single image sample, we obtain a more accurate estimate of the within-subject variability and achieve a significant improvement in recognition. We performed comprehensive experiments on two large face databases, the Face Recognition Grand Challenge and the Cohn-Kanade AU-Coded Facial Expression database, to support the proposed methodology.
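The core operation described in the abstract, projecting a face vector onto a subspace learned from generic images that share one expression, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the PCA-via-SVD construction of the subspace, the variable names, and the random stand-in data are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch: learn an "expression subspace" from generic training
# images that all share one expression, then project a new subject's face
# into it to synthesize that expression for the new subject.
# Random data stands in for vectorized face images.
rng = np.random.default_rng(0)

n_generic, dim, k = 50, 1024, 10      # generic samples, pixel dimension, subspace rank

X = rng.normal(size=(n_generic, dim)) # stand-in for same-expression generic images
mean_expr = X.mean(axis=0)
Xc = X - mean_expr

# PCA via SVD: the top-k right singular vectors span the expression subspace.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
B = Vt[:k]                            # (k, dim) orthonormal basis rows

def project_into_subspace(face, basis, mean):
    """Synthesize an expression image by projecting a face vector
    onto the (affine) expression subspace."""
    coeffs = basis @ (face - mean)
    return mean + basis.T @ coeffs

probe = rng.normal(size=dim)          # the single available sample of a new subject
synth = project_into_subspace(probe, B, mean_expr)
```

Synthesized images like `synth` (one per expression subspace) can then serve as extra within-subject samples when estimating the scatter matrices for discriminant analysis. Note that the projection is idempotent, so a synthesized image already lies in the subspace.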
Face recognition, facial expression, expression variation, expression transformation, expression subspace, single sample per person, generic training, LDA, Training, Databases, Eigenvalues and eigenfunctions
Hoda Mohammadzade, Dimitrios Hatzinakos, "Projection into Expression Subspaces for Face Recognition from Single Sample per Person", IEEE Transactions on Affective Computing, vol. 4, no. 1, pp. 69-82, Jan.-March 2013, doi:10.1109/T-AFFC.2012.30