Multiscale Local Phase Quantization for Robust Component-Based Face Recognition Using Kernel Fusion of Multiple Descriptors
May 2013 (vol. 35 no. 5)
pp. 1164-1177
Chi Ho Chan, Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
M. A. Tahir, Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
J. Kittler, Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
M. Pietikäinen, Department of Electrical and Information Engineering, University of Oulu, Oulu, Finland
Abstract:
Face recognition subject to uncontrolled illumination and blur is challenging. Interestingly, image degradation caused by blurring, often present in real-world imagery, has mostly been overlooked by the face recognition community. Such degradation corrupts face information and affects image alignment, which together negatively impact recognition accuracy. We propose a number of countermeasures designed to achieve system robustness to blurring. First, we propose a novel blur-robust face image descriptor based on Local Phase Quantization (LPQ) and extend it to a multiscale framework (MLPQ) to increase its effectiveness. To maximize the insensitivity to misalignment, the MLPQ descriptor is computed regionally by adopting a component-based framework. Second, the regional features are combined using kernel fusion. Third, the proposed MLPQ representation is combined with the Multiscale Local Binary Pattern (MLBP) descriptor using kernel fusion to increase insensitivity to illumination. Kernel Discriminant Analysis (KDA) of the combined features extracts discriminative information for face recognition. Finally, two geometric normalizations are used to generate and combine multiple scores from different face image scales to further enhance the accuracy. The proposed approach has been comprehensively evaluated using the combined Yale and Extended Yale database B (degraded by artificially induced linear motion blur) as well as the FERET, FRGC 2.0, and LFW databases. The combined system is comparable to state-of-the-art approaches using similar system configurations. The reported work provides new insights into the merits of various face representation and fusion methods, as well as their role in dealing with variable lighting and blur degradation.
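To make the pipeline summarized in the abstract more concrete, the sketch below illustrates two of its core ingredients under stated assumptions: a basic LPQ histogram (uniform local window, sign quantization of four low-frequency STFT coefficients into an 8-bit code, 256-bin regional histogram) and a fused kernel built as a weighted sum of per-region base kernels. This is a minimal sketch, not the authors' implementation: the function names, the default window size of 7, and the histogram-intersection base kernel are illustrative assumptions, and the decorrelation step, the multiscale (MLPQ) extension, the MLBP descriptor, and the KDA stage are omitted.

```python
# Minimal sketch of an LPQ regional histogram and kernel fusion.
# Assumptions are noted inline; this is not the authors' code.
import numpy as np
from scipy.signal import convolve2d


def lpq_histogram(image, win_size=7):
    """256-bin LPQ histogram of a grayscale image region (uniform window)."""
    img = np.asarray(image, dtype=np.float64)
    r = (win_size - 1) // 2
    x = np.arange(-r, r + 1)
    f = 1.0 / win_size                          # lowest non-zero frequency

    # Separable 1-D STFT basis vectors (uniform window, no decorrelation).
    w0 = np.ones_like(x, dtype=np.complex128)   # DC component
    w1 = np.exp(-2j * np.pi * f * x)            # frequency +f
    w2 = np.conj(w1)                            # frequency -f

    def stft(row_filt, col_filt):
        # Separable 2-D convolution: filter along rows, then along columns.
        tmp = convolve2d(img, row_filt[:, None], mode='valid')
        return convolve2d(tmp, col_filt[None, :], mode='valid')

    # Four low-frequency STFT coefficients: [f,0], [0,f], [f,f], [f,-f].
    responses = [stft(w1, w0), stft(w0, w1), stft(w1, w1), stft(w1, w2)]

    # Quantize the signs of the real and imaginary parts into an 8-bit code.
    code = np.zeros(responses[0].shape, dtype=np.uint8)
    bit = 0
    for resp in responses:
        for part in (resp.real, resp.imag):
            code |= (part >= 0).astype(np.uint8) << np.uint8(bit)
            bit += 1

    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist.astype(np.float64) / max(hist.sum(), 1)


def fused_kernel_matrix(regions_a, regions_b, weights=None):
    """Gram matrix from a weighted sum of per-region base kernels.

    regions_a / regions_b are lists (one entry per face component and/or
    descriptor) of arrays with shape (n_samples, n_bins). A histogram-
    intersection base kernel is assumed here purely as a stand-in.
    """
    if weights is None:
        weights = [1.0 / len(regions_a)] * len(regions_a)
    gram = np.zeros((regions_a[0].shape[0], regions_b[0].shape[0]))
    for w, ha, hb in zip(weights, regions_a, regions_b):
        # Histogram intersection: K[i, j] = sum_k min(ha[i, k], hb[j, k]).
        gram += w * np.minimum(ha[:, None, :], hb[None, :, :]).sum(axis=2)
    return gram
```

In the pipeline described above, regional MLPQ and MLBP histograms computed at several window sizes would each contribute base kernels of this kind, the fused kernel would be passed to Kernel Discriminant Analysis, and scores obtained under the two geometric normalizations would then be combined.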
Index Terms:
visual databases, face recognition, feature extraction, image fusion, image representation, lighting, face representation, face image representation, multiscale local phase quantization, local phase quantization, local binary pattern, multiscale local binary pattern descriptor, MLPQ descriptor, MLBP descriptor, robust component-based face recognition, component-based framework, kernel fusion, kernel discriminant analysis, KDA, uncontrolled illumination, illumination insensitivity, image degradation, image blurring, blur-robust face image descriptor, face information, image alignment, misalignment insensitivity, multiscale framework, regional features, discriminative information extraction, geometric normalizations, histograms, Yale database, Extended Yale database B, FERET database, FRGC 2.0 database, LFW database
Citation:
Chi Ho Chan, M. A. Tahir, J. Kittler, M. Pietikäinen, "Multiscale Local Phase Quantization for Robust Component-Based Face Recognition Using Kernel Fusion of Multiple Descriptors," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 5, pp. 1164-1177, May 2013, doi:10.1109/TPAMI.2012.199