Joint Sparse Representation for Robust Multimodal Biometrics Recognition
Jan. 2014 (vol. 36, no. 1)
pp. 113-126
Sumit Shekhar, University of Maryland, College Park
Vishal M. Patel, University of Maryland, College Park
Nasser M. Nasrabadi, U.S. Army Research Lab, Adelphi
Rama Chellappa, University of Maryland, College Park
Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing identity has been widely recognized, computational models for multimodal biometric recognition have only recently received attention. We propose a multimodal sparse representation method that represents the test data as a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. In this way, the method simultaneously accounts for correlations as well as coupling information among the biometric modalities. A multimodal quality measure is also proposed to weight each modality as it is fused. Furthermore, we kernelize the algorithm to handle nonlinearity in the data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.
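To make the joint-sparsity idea concrete, below is a minimal sketch, not the authors' implementation. It codes D modalities against per-modality training dictionaries under an l1/l2 row-sparsity penalty, which drives all modalities to select the same training samples. For simplicity it uses a proximal-gradient (ISTA) solver in place of the alternating direction method reported in the paper, and all names (joint_sparse_code, X_list, y_list, lam) are illustrative.

    # Illustrative sketch only: joint sparse coding across D modalities via
    # proximal gradient (ISTA), not the paper's alternating direction solver.
    import numpy as np

    def joint_sparse_code(X_list, y_list, lam=0.1, n_iter=200):
        """Approximately solve
            min_C  0.5 * sum_i ||y_i - X_i c_i||^2  +  lam * ||C||_{1,2},
        where column c_i of C codes modality i against dictionary X_i and
        ||C||_{1,2} sums the l2 norms of the rows of C, coupling the
        sparsity pattern (selected training samples) across modalities."""
        # All dictionaries index the same training samples, so they share
        # the number of columns (atoms); feature dimensions may differ.
        n_atoms = X_list[0].shape[1]
        D = len(X_list)
        C = np.zeros((n_atoms, D))
        # Common step size 1/L from the largest per-modality Lipschitz
        # constant of the smooth least-squares terms.
        L = max(np.linalg.norm(X, 2) ** 2 for X in X_list)
        for _ in range(n_iter):
            # Gradient step on each modality's reconstruction term.
            G = np.column_stack([X.T @ (X @ C[:, i] - y)
                                 for i, (X, y) in enumerate(zip(X_list, y_list))])
            Z = C - G / L
            # Proximal step: row-wise group soft-thresholding, which zeroes
            # entire rows and so enforces a shared support across modalities.
            norms = np.linalg.norm(Z, axis=1, keepdims=True)
            C = np.maximum(0.0, 1.0 - (lam / L) / np.maximum(norms, 1e-12)) * Z
        return C

In a recognition setting, one would then score each enrolled class by its total reconstruction error across modalities using only that class's coefficients, and assign the test subject to the class with the smallest error, in the spirit of sparse-representation-based classification.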
Index Terms:
Sparse representation, multimodal biometrics, feature fusion
Citation:
Sumit Shekhar, Vishal M. Patel, Nasser M. Nasrabadi, Rama Chellappa, "Joint Sparse Representation for Robust Multimodal Biometrics Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 1, pp. 113-126, Jan. 2014, doi:10.1109/TPAMI.2013.109