Tensor Sparse Coding for Positive Definite Matrices
March 2014 (vol. 36 no. 3)
pp. 592-605
Ravishankar Sivalingam, University of Minnesota, Twin Cities, Minneapolis
Daniel Boley, University of Minnesota, Twin Cities, Minneapolis
Vassilios Morellas, University of Minnesota, Twin Cities, Minneapolis
Nikolaos Papanikolopoulos, University of Minnesota, Twin Cities, Minneapolis
In recent years, there has been extensive research on sparse representation of vector-valued signals. In the matrix case, data points are typically vectorized and thereafter treated as vectors (for example, image patches). However, this approach is not suitable for all matrices, as vectorization may destroy the inherent structure of the data. Symmetric positive definite (SPD) matrices constitute one such class of signals: their implicit structure of positive eigenvalues is lost upon vectorization. This paper proposes a novel sparse coding technique for positive definite matrices that respects the structure of the Riemannian manifold and preserves the positivity of their eigenvalues, without resorting to vectorization. Synthetic and real-world computer vision experiments with region covariance descriptors demonstrate the need for and the applicability of the new sparse coding model. This work serves to bridge the gap between the sparse modeling paradigm and the space of positive definite matrices.
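To illustrate the core modeling idea (not the paper's actual formulation, which optimizes over the SPD manifold), the sketch below encodes an SPD matrix as a nonnegative sparse combination of SPD dictionary atoms. Because any nonnegative combination of SPD matrices with at least one positive coefficient is itself SPD, the reconstruction keeps positive eigenvalues by construction. The dictionary, the use of `scipy.optimize.nnls` as the solver, and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls  # nonnegative least squares, a stand-in solver

rng = np.random.default_rng(0)

def random_spd(n):
    # Random symmetric positive definite matrix: A A^T plus a small ridge.
    a = rng.standard_normal((n, n))
    return a @ a.T + 1e-3 * np.eye(n)

n, k = 4, 8
atoms = [random_spd(n) for _ in range(k)]  # hypothetical dictionary of SPD atoms
target = random_spd(n)                     # SPD matrix to be encoded

# Solve min_{c >= 0} || sum_i c_i A_i - S ||_F by stacking vectorized atoms.
# (Vectorization here is only a computational device for the linear solve;
# positivity is preserved because the coefficients are constrained nonnegative.)
D = np.stack([a.ravel() for a in atoms], axis=1)
coeffs, residual = nnls(D, target.ravel())

recon = sum(c * a for c, a in zip(coeffs, atoms))
print("min coefficient:", coeffs.min())
print("min eigenvalue of reconstruction:", np.linalg.eigvalsh(recon).min())
```

The reconstruction's smallest eigenvalue stays positive whenever any coefficient is nonzero, which is the structural property the paper's tensor sparse coding model is designed to guarantee directly on the manifold.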
Index Terms:
Covariance matrices, sparse matrices, vectors, symmetric matrices, dictionaries, encoding, optimization, sparse coding, positive definite matrices, region covariance descriptors, computer vision
Ravishankar Sivalingam, Daniel Boley, Vassilios Morellas, Nikolaos Papanikolopoulos, "Tensor Sparse Coding for Positive Definite Matrices," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 3, pp. 592-605, March 2014, doi:10.1109/TPAMI.2013.143