Issue No. 9 - Sept. 2012 (vol. 34)
pp. 1681-1690
A. Eriksson , Dept. of Comput. Sci., Univ. of Adelaide, North Terrace, SA, Australia
A. van den Hengel , Dept. of Comput. Sci., Univ. of Adelaide, North Terrace, SA, Australia
ABSTRACT
The calculation of a low-rank approximation to a matrix is fundamental to many algorithms in computer vision and other fields. One of the primary tools for calculating such low-rank approximations is the Singular Value Decomposition, but this method is not applicable when the data contain outliers or missing elements. Unfortunately, this is often the case in practice. We present a method for low-rank matrix approximation which is a generalization of the Wiberg algorithm. Our method calculates the rank-constrained factorization that minimizes the L1 norm, and does so in the presence of missing data. This is achieved by exploiting the differentiability of linear programs, and results in an algorithm that can be efficiently implemented using existing optimization software. We show the results of experiments on synthetic and real data.
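The core subproblem behind L1-norm factorization is L1 regression, which the abstract notes can be posed as a linear program. The sketch below is not the authors' Wiberg-style algorithm; it is a much simpler alternating scheme, included only to illustrate the LP reformulation and the missing-data masking. The function names (`l1_regression`, `l1_factorize`) and the choice of `scipy.optimize.linprog` are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog


def l1_regression(A, b):
    """min_x ||A x - b||_1 via the standard LP reformulation:
    minimize sum(t) subject to -t <= A x - b <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])  # objective: sum of slacks t
    # Encode  A x - b <= t  and  -(A x - b) <= t  as A_ub z <= b_ub, z = [x; t].
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m  # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]


def l1_factorize(M, W, r, iters=20, seed=0):
    """Crude alternating scheme (illustrative only, NOT the Wiberg method):
    fix U and solve an L1 regression per column of V over the observed
    entries W, then fix V and solve per row of U."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, r))
    V = np.zeros((r, n))
    for _ in range(iters):
        for j in range(n):
            obs = W[:, j]                         # observed rows of column j
            V[:, j] = l1_regression(U[obs], M[obs, j])
        for i in range(m):
            obs = W[i, :]                         # observed columns of row i
            U[i] = l1_regression(V[:, obs].T, M[i, obs])
    return U, V
```

Because each subproblem minimizes an L1 residual, a single gross outlier is ignored as long as the clean observations outvote it, which is the robustness property the abstract describes; the paper's contribution is solving the same L1 objective far more efficiently by differentiating these linear programs instead of naively alternating.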
INDEX TERMS
Robustness, Approximation algorithms, Equations, Least squares approximation, Computational efficiency, Optimization, L1-minimization, Low-rank matrix approximation
CITATION
A. Eriksson, A. van den Hengel, "Efficient Computation of Robust Weighted Low-Rank Matrix Approximations Using the L1 Norm," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 34, no. 9, pp. 1681-1690, Sept. 2012, doi:10.1109/TPAMI.2012.116