Combining Reconstructive and Discriminative Subspace Methods for Robust Classification and Regression by Subsampling
March 2006 (vol. 28, no. 3)
pp. 337-350
Linear subspace methods that provide sufficient reconstruction of the data, such as PCA, offer an efficient way of dealing with missing pixels, outliers, and occlusions that often appear in visual data. Discriminative methods, such as LDA, on the other hand, are better suited for classification tasks but are highly sensitive to corrupted data. We present a theoretical framework for achieving the best of both types of methods: an approach that combines the discrimination power of discriminative methods with the reconstruction property of reconstructive methods, which enables one to work on subsets of pixels in images to efficiently detect and reject outliers. The proposed approach is therefore capable of robust classification with a high breakdown point. We also show that subspace methods, such as CCA, which are used for solving regression tasks, can be treated in a similar manner. The theoretical results are demonstrated on several computer vision tasks, showing that the proposed approach significantly outperforms standard discriminative methods in the case of missing pixels and images containing occlusions and outliers.
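The combination the abstract describes can be illustrated with a small numpy sketch. This is not the authors' implementation; all names, dimensions, and parameters below are hypothetical. It builds a PCA basis (reconstructive), computes a Fisher discriminant in the PCA subspace (discriminative, in the style of Fisherfaces), and then estimates the PCA coefficients of an occluded test image robustly by solving least squares on random pixel subsets and keeping the hypothesis with the smallest median reconstruction residual (an LMedS-style subsampling step):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic two-class "image" data (vectors of d pixels) ---
d, n_per_class = 50, 40
mean0, mean1 = np.zeros(d), np.zeros(d)
mean1[:10] = 3.0                      # classes differ in the first 10 pixels
X = np.vstack([rng.normal(mean0, 1.0, (n_per_class, d)),
               rng.normal(mean1, 1.0, (n_per_class, d))])
y = np.array([0] * n_per_class + [1] * n_per_class)

# --- reconstructive step: PCA basis with k components ---
mu = X.mean(axis=0)
Xc = X - mu
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5
W_pca = Vt[:k].T                      # d x k orthonormal basis
A = Xc @ W_pca                        # n x k PCA coefficients

# --- discriminative step: Fisher LDA in the PCA subspace ---
m0, m1 = A[y == 0].mean(0), A[y == 1].mean(0)
Sw = np.cov(A[y == 0].T) + np.cov(A[y == 1].T)   # within-class scatter
w_lda = np.linalg.solve(Sw, m1 - m0)             # 1-D discriminant direction

# --- robust coefficient estimation by pixel subsampling ---
def robust_coeffs(x, W, mu, n_hyp=500, subset=8, rng=rng):
    """Estimate PCA coefficients of a possibly occluded image x by
    solving least squares on random pixel subsets and keeping the
    hypothesis with the smallest median reconstruction residual."""
    best, best_err = None, np.inf
    for _ in range(n_hyp):
        idx = rng.choice(len(x), size=subset, replace=False)
        a, *_ = np.linalg.lstsq(W[idx], x[idx] - mu[idx], rcond=None)
        err = np.median(np.abs(W @ a - (x - mu)))  # robust to outlying pixels
        if err < best_err:
            best, best_err = a, err
    return best

# occlude a class-1 test image on 40% of its pixels with gross outliers
x_occ = rng.normal(mean1, 1.0, d)
x_occ[rng.choice(d, size=int(0.4 * d), replace=False)] = 50.0

a_naive = W_pca.T @ (x_occ - mu)      # standard projection, corrupted by occlusion
a_robust = robust_coeffs(x_occ, W_pca, mu)

# classify by the nearer class mean along the LDA direction
def classify(a):
    return int(abs(a @ w_lda - m1 @ w_lda) < abs(a @ w_lda - m0 @ w_lda))

print("naive :", classify(a_naive))
print("robust:", classify(a_robust))
```

Because the coefficients are estimated from pixel subsets rather than the full projection, occluded pixels that never enter the winning subset cannot bias the estimate; the subsequent LDA classification then operates on coefficients that reflect the uncorrupted part of the image.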

[1] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, “Eigenfaces versus Fisherfaces: Recognition Using Class Specific Linear Projection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
[2] A.J. Bell and T.J. Sejnowski, “An Information Maximisation Approach to Blind Separation and Blind Deconvolution,” Neural Computation, vol. 7, no. 6, pp. 1129-1159, 1995.
[3] M. Borga and H. Knutsson, “Canonical Correlation Analysis in Early Vision Processing,” Proc. Ninth European Symp. Artificial Neural Networks, pp. 309-314, 2001.
[4] M. Borga, “Learning Multidimensional Signal Processing,” PhD thesis, Linköping Univ., Sweden, 1998.
[5] C.V. Chork and P.J. Rousseeuw, “Integrating a High-Breakdown Option into Discriminant Analysis in Exploration Geochemistry,” J. Geochemical Exploration, vol. 43, pp. 191-203, 1992.
[6] P. Comon, “Independent Component Analysis—A New Concept?” Signal Processing, vol. 36, pp. 287-314, 1994.
[7] C. Croux and C. Dehon, “Robust Linear Discrimination Analysis Using S-Estimators,” Canadian J. Statistics, vol. 29, pp. 473-492, 2001.
[8] F. De la Torre and M.J. Black, “A Framework for Robust Subspace Learning,” Int'l J. Computer Vision, vol. 54, no. 1, pp. 117-142, 2003.
[9] C. Dehon, P. Filzmoser, and C. Croux, “Robust Methods for Canonical Correlation Analysis,” Data Analysis, Classification, and Related Methods, pp. 321-326, Berlin: Springer-Verlag, 2000.
[10] R.O. Duda, P.E. Hart, and D.G. Stork, Pattern Classification, second ed. Wiley-Interscience, 2000.
[11] H. Hotelling, “Analysis of a Complex of Statistical Variables into Principal Components,” J. Educational Psychology, vol. 24, pp. 417-441, 1933.
[12] R.A. Fisher, “The Use of Multiple Measurements in Taxonomic Problems,” Annals of Eugenics, vol. 7, pp. 179-188, 1936.
[13] R. Gross, I. Matthews, and S. Baker, “Fisher Light-Fields for Face Recognition across Pose and Illumination,” Proc. German Symp. Pattern Recognition (DAGM), pp. 481-489, 2002.
[14] D.M. Hawkins and G.J. McLachlan, “High-Breakdown Linear Discriminant Analysis,” J. Am. Statistical Assoc., vol. 92, pp. 136-143, 1997.
[15] X. He and W.K. Fung, “High Breakdown Estimation for Multiple Populations with Applications to Discriminant Analysis,” J. Multivariate Analysis, vol. 72, no. 2, pp. 151-162, 2000.
[16] M. Hubert and K. Van Driessen, “Fast and Robust Discriminant Analysis,” Computational Statistics and Data Analysis, vol. 45, pp. 301-320, 2003.
[17] D.D. Lee and H.S. Seung, “Algorithms for Non-Negative Matrix Factorization,” Advances in Neural Information Processing Systems, vol. 13, pp. 556-562, 2001.
[18] A. Leonardis and H. Bischof, “Robust Recognition Using Eigenimages,” Computer Vision and Image Understanding, vol. 78, no. 1, pp. 99-118, 2000.
[19] X. Lu, Y. Wang, and A.K. Jain, “Combining Classifiers for Face Recognition,” Proc. IEEE Int'l Conf. Multimedia and Expo, vol. 3, pp. 13-16, 2003.
[20] G.L. Marcialis and F. Roli, “Fusion of PCA and LDA for Face Verification,” Proc. Post-ECCV Workshop Biometric Authentication (BIOMET), pp. 30-37, 2002.
[21] A.M. Martinez and A.C. Kak, “PCA versus LDA,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228-233, Feb. 2001.
[22] P. Meer, C.V. Stewart, and D.E. Tyler, “Robust Computer Vision: An Interdisciplinary Challenge,” guest editorial, Computer Vision and Image Understanding, vol. 78, no. 1, pp. 1-7, 2000.
[23] O.L. Mangasarian and D.R. Musicant, “Robust Linear and Support Vector Regression,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 9, pp. 950-955, Sept. 2000.
[24] T. Melzer, M. Reiter, and H. Bischof, “Appearance Models Based on Kernel Canonical Correlation Analysis,” Pattern Recognition, vol. 36, no. 9, pp. 1961-1973, 2003.
[25] S.K. Nayar, H. Murase, and S.A. Nene, “Parametric Appearance Representation,” Early Visual Learning, pp. 131-160, Oxford Univ. Press, 1996.
[26] S.A. Nene, S.K. Nayar, and H. Murase, “Columbia Object Image Library (COIL-20),” Technical Report CUCS-005-96, Feb. 1996.
[27] A.M. Pires, “Robust Linear Discriminant Analysis and the Projection Pursuit Approach, Practical Aspects,” Proc. Int'l Conf. Robust Statistics, 2001.
[28] P.J. Rousseeuw, “Multivariate Estimation with High Breakdown Point,” Math. Statistics and Applications, vol. B, pp. 283-297, 1985.
[29] F. Samaria and A. Harter, “Parameterisation of a Stochastic Model for Human Face Identification,” Proc. Second IEEE Workshop Applications of Computer Vision, Dec. 1994.
[30] I. Stainvas, N. Intrator, and A. Moshaiov, “Improving Recognition via Reconstruction,” submitted.
[31] D.L. Swets and J. Weng, “Using Discriminant Eigenfeatures for Image Retrieval,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 8, pp. 831-837, Aug. 1996.
[32] M. Turk and A. Pentland, “Eigenfaces for Recognition,” J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[33] J. Yang and J.-Y. Yang, “Why Can LDA Be Performed in PCA Transformed Space?” Pattern Recognition, vol. 36, pp. 563-566, 2003.
[34] W. Zhao, A. Krishnaswamy, R. Chellappa, D. Swets, and J. Weng, “Discriminant Analysis of Principal Components for Face Recognition,” Face Recognition: from Theory to Applications, Springer-Verlag, pp. 73-85, 1998.

Index Terms:
Subspace methods, reconstructive methods, discriminative methods, robust classification, robust regression, subsampling, PCA, LDA, CCA, high-breakdown point classification, outlier detection, occlusion.
Sanja Fidler, Danijel Skočaj, Aleš Leonardis, "Combining Reconstructive and Discriminative Subspace Methods for Robust Classification and Regression by Subsampling," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 3, pp. 337-350, March 2006, doi:10.1109/TPAMI.2006.46