Optimal Linear Representations of Images for Object Recognition
May 2004 (vol. 26 no. 5)
pp. 662-666

Abstract—Although linear representations are frequently used in image analysis, their performance is seldom optimal for specific applications. This paper proposes a stochastic gradient algorithm for finding optimal linear representations of images for use in appearance-based object recognition. Using the nearest-neighbor classifier, a recognition performance function is specified, and linear representations that maximize this performance are sought. To solve this optimization problem on a Grassmann manifold, a stochastic gradient algorithm utilizing intrinsic flows is introduced. Several experimental results are presented to demonstrate this algorithm.
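To make the setup concrete, the following Python sketch illustrates the kind of optimization the abstract describes: stochastic gradient ascent of a nearest-neighbor recognition rate over subspaces, with updates kept on the Grassmann manifold. This is not the authors' algorithm (the paper uses intrinsic gradient flows and a smoothed performance function); here a simple random-tangent-direction finite-difference estimate and a QR retraction stand in, and all data, names, and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for image data: n-dimensional vectors, c classes.
n, d, c, per = 20, 3, 4, 10
means = rng.normal(scale=3.0, size=(c, n))
X = np.vstack([m + rng.normal(size=(per, n)) for m in means])
y = np.repeat(np.arange(c), per)

def recognition_rate(U):
    """Leave-one-out 1-NN accuracy in the projected space span(U)."""
    Z = X @ U
    D = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # exclude each point as its own neighbor
    return np.mean(y[D.argmin(axis=1)] == y)

def retract(A):
    """Map an n x d matrix back onto the manifold via QR orthonormalization."""
    Q, _ = np.linalg.qr(A)
    return Q

U = retract(rng.normal(size=(n, d)))     # random initial subspace basis
step, eps = 0.5, 0.1
for _ in range(200):
    # Stochastic directional-derivative estimate in a random tangent direction
    # (the recognition rate is piecewise constant, so a coarse eps is used).
    V = rng.normal(size=(n, d))
    V -= U @ (U.T @ V)                   # project onto the tangent space at U
    V /= np.linalg.norm(V)
    g = (recognition_rate(retract(U + eps * V))
         - recognition_rate(retract(U - eps * V))) / (2 * eps)
    U = retract(U + step * g * V)        # ascend, then retract to the manifold
```

The QR retraction is a common cheap substitute for the exact geodesic flow used in the paper; both keep the columns of U orthonormal so that U always represents a point on the Grassmann manifold.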


Index Terms:
Optimal subspaces, Grassmann manifold, object recognition, linear representations, dimension reduction, optimal component analysis.
Citation:
Xiuwen Liu, Anuj Srivastava, Kyle Gallivan, "Optimal Linear Representations of Images for Object Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 662-666, May 2004, doi:10.1109/TPAMI.2004.1273986