Issue No. 2, February 2009 (vol. 31)
pp. 210-227
John Wright , University of Illinois at Urbana-Champaign, Urbana
Allen Y. Yang , University of California, Berkeley, Berkeley
Arvind Ganesh , University of Illinois at Urbana-Champaign, Urbana
S. Shankar Sastry , University of California, Berkeley, Berkeley
Yi Ma , University of Illinois at Urbana-Champaign, Urbana
ABSTRACT
We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by \ell^{1}-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as Eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold predicted by the theory of sparse representation. The framework handles errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.
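The classification scheme described in the abstract (represent a test image as a sparse linear combination of all training images, recover the coefficients by \ell^{1}-minimization, and assign the class with the smallest reconstruction residual) can be sketched as follows. This is a minimal illustration in Python, not the authors' released implementation; the linear-programming formulation of the \ell^{1} problem, the function name `src_classify`, and the toy data are our assumptions.

```python
# Sketch of sparse-representation-based classification (SRC), not the
# authors' code: training samples are columns of A, the test sample y is
# expressed as y = A x with minimal l1 norm, and y is assigned to the
# class whose coefficients give the smallest reconstruction residual.
import numpy as np
from scipy.optimize import linprog

def src_classify(A, labels, y):
    """A: d x n dictionary of unit-normalized training columns;
    labels: length-n class label per column; y: length-d test sample."""
    d, n = A.shape
    # l1 minimization as a linear program: write x = u - v with u, v >= 0,
    # minimize sum(u) + sum(v) subject to A (u - v) = y.
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    x = res.x[:n] - res.x[n:]
    # Assign y to the class minimizing the residual ||y - A x_class||_2,
    # where x_class keeps only that class's coefficients.
    classes = sorted(set(labels))
    residuals = []
    for cls in classes:
        mask = np.array([lab == cls for lab in labels])
        residuals.append(np.linalg.norm(y - A[:, mask] @ x[mask]))
    return classes[int(np.argmin(residuals))]

# Toy demo (illustrative data): two classes clustered around different
# base vectors, six training samples each, in a 5-dimensional space.
rng = np.random.default_rng(0)
d = 5
base0 = rng.normal(size=(d, 1))
base1 = rng.normal(size=(d, 1))
A = np.hstack([base0 + 0.1 * rng.normal(size=(d, 6)),
               base1 + 0.1 * rng.normal(size=(d, 6))])
A /= np.linalg.norm(A, axis=0)  # unit-normalize columns
labels = [0] * 6 + [1] * 6
y = A[:, 8]  # a class-1 training column, exactly representable
pred = src_classify(A, labels, y)
print(pred)
```

Because the test vector here equals one unit-norm column, the \ell^{1} minimizer is the single-entry coefficient vector selecting that column, so the class-1 residual is essentially zero and the class-0 residual is not; the real algorithm additionally handles noise and an explicit sparse error term for occlusion, which this sketch omits.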
INDEX TERMS
Face recognition, feature extraction, occlusion and corruption, sparse representation, compressed sensing, \ell^{1}-minimization, validation and outlier rejection.
CITATION
John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, Yi Ma, "Robust Face Recognition via Sparse Representation", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.31, no. 2, pp. 210-227, February 2009, doi:10.1109/TPAMI.2008.79
REFERENCES
[1] J. Rissanen, “Modeling by Shortest Data Description,” Automatica, vol. 14, pp. 465-471, 1978.
[2] M. Hansen and B. Yu, “Model Selection and the Minimum Description Length Principle,” J. Am. Statistical Assoc., vol. 96, pp.746-774, 2001.
[3] A. d'Aspremont, L.E. Ghaoui, M. Jordan, and G. Lanckriet, “A Direct Formulation of Sparse PCA Using Semidefinite Programming,” SIAM Rev., vol. 49, pp. 434-448, 2007.
[4] K. Huang and S. Aviyente, “Sparse Representation for Signal Classification,” Neural Information Processing Systems, 2006.
[5] V. Vapnik, The Nature of Statistical Learning Theory. Springer, 2000.
[6] T. Cover, “Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition,” IEEE Trans. Electronic Computers, vol. 14, no. 3, pp. 326-334, 1965.
[7] B. Olshausen and D. Field, “Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1?” Vision Research, vol. 37, pp. 3311-3325, 1997.
[8] T. Serre, “Learning a Dictionary of Shape-Components in Visual Cortex: Comparison with Neurons, Humans and Machines,” PhD dissertation, MIT, 2006.
[9] D. Donoho, “For Most Large Underdetermined Systems of Linear Equations the Minimal $\ell^{1}$-Norm Solution Is Also the Sparsest Solution,” Comm. Pure and Applied Math., vol. 59, no. 6, pp. 797-829, 2006.
[10] E. Candès, J. Romberg, and T. Tao, “Stable Signal Recovery from Incomplete and Inaccurate Measurements,” Comm. Pure and Applied Math., vol. 59, no. 8, pp. 1207-1223, 2006.
[11] E. Candès and T. Tao, “Near-Optimal Signal Recovery from Random Projections: Universal Encoding Strategies?” IEEE Trans. Information Theory, vol. 52, no. 12, pp. 5406-5425, 2006.
[12] P. Zhao and B. Yu, “On Model Selection Consistency of Lasso,” J. Machine Learning Research, no. 7, pp. 2541-2567, 2006.
[13] E. Amaldi and V. Kann, “On the Approximability of Minimizing Nonzero Variables or Unsatisfied Relations in Linear Systems,” Theoretical Computer Science, vol. 209, pp. 237-260, 1998.
[14] R. Tibshirani, “Regression Shrinkage and Selection via the LASSO,” J. Royal Statistical Soc. B, vol. 58, no. 1, pp. 267-288, 1996.
[15] E. Candès, “Compressive Sampling,” Proc. Int'l Congress of Mathematicians, 2006.
[16] A. Martinez, “Recognizing Imprecisely Localized, Partially Occluded, and Expression Variant Faces from a Single Sample per Class,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 6, pp. 748-763, June 2002.
[17] B. Park, K. Lee, and S. Lee, “Face Recognition Using Face-ARG Matching,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1982-1988, Dec. 2005.
[18] R. Duda, P. Hart, and D. Stork, Pattern Classification, second ed. John Wiley & Sons, 2001.
[19] J. Ho, M. Yang, J. Lim, K. Lee, and D. Kriegman, “Clustering Appearances of Objects under Varying Illumination Conditions,” Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, pp.11-18, 2003.
[20] S. Li and J. Lu, “Face Recognition Using the Nearest Feature Line Method,” IEEE Trans. Neural Networks, vol. 10, no. 2, pp. 439-443, 1999.
[21] P. Sinha, B. Balas, Y. Ostrovsky, and R. Russell, “Face Recognition by Humans: Nineteen Results All Computer Vision Researchers Should Know about,” Proc. IEEE, vol. 94, no. 11, pp. 1948-1962, 2006.
[22] W. Zhao, R. Chellappa, J. Phillips, and A. Rosenfeld, “Face Recognition: A Literature Survey,” ACM Computing Surveys, pp.399-458, 2003.
[23] M. Turk and A. Pentland, “Eigenfaces for Recognition,” Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, 1991.
[24] P. Belhumeur, J. Hespanda, and D. Kriegman, “Eigenfaces versus Fisherfaces: Recognition Using Class Specific Linear Projection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
[25] X. He, S. Yan, Y. Hu, P. Niyogi, and H. Zhang, “Face Recognition Using Laplacianfaces,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 328-340, Mar. 2005.
[26] J. Kim, J. Choi, J. Yi, and M. Turk, “Effective Representation Using ICA for Face Recognition Robust to Local Distortion and Partial Occlusion,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1977-1981, Dec. 2005.
[27] S. Li, X. Hou, H. Zhang, and Q. Cheng, “Learning Spatially Localized, Parts-Based Representation,” Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, pp. 1-6, 2001.
[28] A. Leonardis and H. Bischof, “Robust Recognition Using Eigenimages,” Computer Vision and Image Understanding, vol. 78, no. 1, pp. 99-118, 2000.
[29] S. Fidler, D. Skocaj, and A. Leonardis, “Combining Reconstructive and Discriminative Subspace Methods for Robust Classification and Regression by Subsampling,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 3, Mar. 2006.
[30] R. Basri and D. Jacobs, “Lambertian Reflection and Linear Subspaces,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 3, pp. 218-233, Mar. 2003.
[31] H. Wang, S. Li, and Y. Wang, “Generalized Quotient Image,” Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, pp. 498-505, 2004.
[32] K. Lee, J. Ho, and D. Kriegman, “Acquiring Linear Subspaces for Face Recognition under Variable Lighting,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 684-698, May 2005.
[33] D. Donoho and M. Elad, “Optimal Sparse Representation in General (Nonorthogonal) Dictionaries via $\ell^{1}$ Minimization,” Proc. Nat'l Academy of Sciences, pp. 2197-2202, Mar. 2003.
[34] S. Chen, D. Donoho, and M. Saunders, “Atomic Decomposition by Basis Pursuit,” SIAM Rev., vol. 43, no. 1, pp. 129-159, 2001.
[35] D. Donoho and Y. Tsaig, “Fast Solution of $\ell^{1}$-Norm Minimization Problems when the Solution May Be Sparse,” preprint, http://www.stanford.edu/~tsaigresearch.html , 2006.
[36] D. Donoho, “Neighborly Polytopes and Sparse Solution of Underdetermined Linear Equations,” Technical Report 2005-4, Dept. of Statistics, Stanford Univ., 2005.
[37] Y. Sharon, J. Wright, and Y. Ma, “Computation and Relaxation of Conditions for Equivalence between $\ell^{1}$ and $\ell^{0}$ Minimization,” CSL Technical Report UILU-ENG-07-2208, Univ. of Illinois, Urbana-Champaign, 2007.
[38] D. Donoho, “For Most Large Underdetermined Systems of Linear Equations the Minimal $\ell^{1}$-Norm Near-Solution Approximates the Sparsest Solution,” Comm. Pure and Applied Math., vol. 59, no. 10, pp. 907-934, 2006.
[39] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge Univ. Press, 2004.
[40] E. Candès and J. Romberg, “$\ell^{1}$-Magic: Recovery of Sparse Signals via Convex Programming,” http://www.acm.caltech.edu/l1magic/, 2005.
[41] M. Savvides, R. Abiantun, J. Heo, S. Park, C. Xie, and B. Vijayakumar, “Partial and Holistic Face Recognition on FRGC-II Data Using Support Vector Machine Kernel Correlation Feature Analysis,” Proc. Conf. Computer Vision and Pattern Recognition Workshop (CVPR), 2006.
[42] C. Liu, “Capitalize on Dimensionality Increasing Techniques for Improving Face Recognition Grand Challenge Performance,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp.725-737, May 2006.
[43] P. Phillips, W. Scruggs, A. O'Toole, P. Flynn, K. Bowyer, C. Schott, and M. Sharpe, “FRVT 2006 and ICE 2006 Large-Scale Results,” Technical Report NISTIR 7408, NIST, 2007.
[44] D. Donoho and J. Tanner, “Counting Faces of Randomly Projected Polytopes When the Projection Radically Lowers Dimension,” preprint, http://www.math.utah.edu/~tanner/, 2007.
[45] H. Rauhut, K. Schnass, and P. Vandergheynst, “Compressed Sensing and Redundant Dictionaries,” to appear in IEEE Trans. Information Theory, 2007.
[46] D. Donoho, “High-Dimensional Data Analysis: The Curses and Blessings of Dimensionality,” AMS Math Challenges Lecture, 2000.
[47] S. Kaski, “Dimensionality Reduction by Random Mapping,” Proc. IEEE Int'l Joint Conf. Neural Networks, vol. 1, pp. 413-418, 1998.
[48] D. Achlioptas, “Database-Friendly Random Projections,” Proc. ACM Symp. Principles of Database Systems, pp. 274-281, 2001.
[49] E. Bingham and H. Mannila, “Random Projection in Dimensionality Reduction: Applications to Image and Text Data,” Proc. ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining, pp. 245-250, 2001.
[50] R. Baraniuk and M. Wakin, “Random Projections of Smooth Manifolds,” Foundations of Computational Math., 2007.
[51] R. Baraniuk, M. Davenport, R. de Vore, and M. Wakin, “The Johnson-Lindenstrauss Lemma Meets Compressed Sensing,” Constructive Approximation, 2007.
[52] F. MacWilliams and N. Sloane, The Theory of Error-Correcting Codes. North Holland Publishing Co., 1981.
[53] J. Kim, J. Choi, J. Yi, and M. Turk, “Effective Representation Using ICA for Face Recognition Robust to Local Distortion and Partial Occlusion,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1977-1981, Dec. 2005.
[54] S. Li, X. Hou, H. Zhang, and Q. Cheng, “Learning Spatially Localized, Parts-Based Representation,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 1-6, 2001.
[55] T. Ahonen, A. Hadid, and M. Pietikainen, “Face Description with Local Binary Patterns: Application to Face Recognition,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp.2037-2041, Dec. 2006.
[56] M. Lades, J. Vorbruggen, J. Buhmann, J. Lange, C. von der Malsburg, R. Wurtz, and W. Konen, “Distortion Invariant Object Recognition in the Dynamic Link Architecture,” IEEE Trans. Computers, vol. 42, pp. 300-311, 1993.
[57] A. Pentland, B. Moghaddam, and T. Starner, “View-Based and Modular Eigenspaces for Face Recognition,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1994.
[58] A. Georghiades, P. Belhumeur, and D. Kriegman, “From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643-660, June 2001.
[59] K. Lee, J. Ho, and D. Kriegman, “Acquiring Linear Subspaces for Face Recognition under Variable Lighting,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 684-698, 2005.
[60] A. Martinez and R. Benavente, “The AR Face Database,” CVC Technical Report 24, 1998.
[61] D. Geiger, T. Liu, and M. Donahue, “Sparse Representations for Image Decompositions,” Int'l J. Computer Vision, vol. 33, no. 2, 1999.
[62] R. Zass and A. Shashua, “Nonnegative Sparse PCA,” Proc. Neural Information and Processing Systems, 2006.