In addition to its extensive use in the forensic sciences, biometrics technology is rapidly being adopted in a wide variety of security applications such as computer and physical access control, electronic commerce, digital rights management, background checking, homeland security, and defense. Security systems demand high accuracy, high throughput, and low cost from their biometric subsystems. Although biometric systems have made great strides, especially over recent years, there remains a need for vigorous research on the many challenging problems still outstanding. The goal of this special issue has been to document the current state of the art, acknowledge the latest breakthroughs achieved by scientists working in the area of biometric recognition, and identify promising future research areas.
We received a tremendous response to the Call for Papers for this issue. In total, there were 85 submissions, one of the largest numbers ever for a TPAMI special issue. Of these, 19 papers were accepted: 15 regular and four short. Table 1 shows the percentage distribution of submitted and accepted papers by biometric subarea. The "other" category includes: ear, signature, EEG, gait, hand, dental, and speaker.
With so many submissions, we also required many reviews. Each paper was reviewed by at least three experts and some papers by up to five. We were extremely fortunate to have so many experts in the field who provided detailed, insightful, and timely reviews, leading to the high quality of the accepted papers. We also thank TPAMI Editor-in-Chief David Kriegman and Associate Editor-in-Chief David Fleet for recognizing that the large and growing number of biometrics submissions to TPAMI in recent years indicates widespread interest in this field and thus warrants this special issue. Finally, the IEEE Computer Society office, and Elaine Stephenson in particular, with whom we mainly dealt, are to be commended and thanked for their helpful and efficient work in shepherding this issue through its many stages toward publication.
We begin the special issue with two papers that extend across all biometric modalities, one on evaluating system performance and one on biometric quality. An important aspect of any biometric authentication system is its performance evaluation. Although most evaluation efforts today are based on empirical observational studies carried out on small or medium-size databases, it is recognized that statistical modeling tools could play a significant role in inferring the performance of a system on a very large population. The paper "Statistical Performance Evaluation of Biometric Authentication Systems Using Random Effects Models" by Sinjini Mitra, Marios Savvides, and Anthony Brockwell discusses a new method of extrapolating the performance of a biometric system based on a random effects model together with a Bayesian inference technique. This approach attempts to overcome some limitations of existing methods, such as strong hypotheses on the data distribution and its statistical independence, sensitivity to the presence of outliers, etc. A very appealing feature of the proposed technique is its ability to explicitly and independently model the sample variations that could cause a performance drop (e.g., lighting and expression changes in a face recognition system).
Estimating the quality of a biometric sample is important to overcome some of the problems that could arise in the practical use of a biometric system; in fact, effective quality-based differential processing techniques, such as sample replacement, algorithm selection, threshold adaptation, etc., could be exploited to improve the overall system performance. In the paper "Performance of Biometric Quality Measures" by Patrick Grother and Elham Tabassi, the term quality is not used to refer to the fidelity of the sample but, instead, to the utility of the sample to an automated system: A good quality sample is characterized by high "matchability." Although a number of approaches have been proposed in the literature to numerically quantify the sample quality for different biometric modalities, few efforts have been devoted to the definition of clear rules for the evaluation of the biometric quality measures. This paper focuses on protocols, measures, presentation, and data collection for conducting quantitative and systematic evaluation of systems that produce a scalar summary of a biometric sample's quality.
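To make this kind of evaluation concrete, the following minimal sketch (all function names, thresholds, and data are our own illustrative choices, not taken from the paper) shows an error-versus-reject analysis: if a quality measure truly predicts matchability, the false non-match rate should fall as the lowest-quality samples are progressively rejected.

```python
import numpy as np

def error_vs_reject(genuine_scores, qualities, threshold, reject_fracs):
    """Error-versus-reject analysis: for each rejection fraction f,
    discard the worst-quality fraction f of genuine attempts and
    report the false non-match rate (FNMR) on the retained samples."""
    order = np.argsort(qualities)                 # ascending: worst quality first
    s = np.asarray(genuine_scores, dtype=float)[order]
    fnmr = []
    for f in reject_fracs:
        kept = s[int(f * len(s)):]                # drop the worst fraction f
        fnmr.append(float((kept < threshold).mean()))
    return fnmr
```

A useful quality measure produces a monotonically decreasing curve; a quality measure uncorrelated with matchability produces a flat one.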
Three fingerprint papers follow, dealing with various aspects of template security. This concentration on security over and above the basic task of recognition indicates maturity in this field. For a long time, it was assumed that a minutia-based fingerprint template did not contain sufficient information to allow reconstruction of the original fingerprint. This belief was recently questioned in research papers where the reversibility of minutiae templates was demonstrated to different extents. In "From Templates to Images: Reconstructing Fingerprints from Minutiae Points" by Arun Ross, Jidnya Shah, and Anil K. Jain, an effective technique is presented that specifically addresses the reconstruction of the orientation field information, the class or type information, and the friction ridge structure. Although it appears very unlikely that a human expert could be fooled by a fingerprint image reconstructed from a template, spoofing an automated biometric system is possible. This indicates the need to properly protect fingerprint templates. The paper "Generating Cancelable Fingerprint Templates" by Nalini K. Ratha, Sharat Chikkerur, Jonathan H. Connell, and Ruud M. Bolle demonstrates several methods to generate multiple cancelable templates from fingerprint images. In essence, a user can be given as many biometric templates as needed by issuing a new nonsecret transformation "key." Each template consists of the original features (transformed through the key) together with the key itself. Matching the template against a new sample requires mapping the new sample through the key associated with the template and performing the match in the transformed feature space. The templates can be cancelled and replaced if compromised. Because of the key-dependent, one-way transformation used, the stored features can neither be mapped back into the original feature space nor used to reconstruct the original fingerprint image.
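The key-based transformation idea can be illustrated with a deliberately simplified sketch (hypothetical code, not the authors' scheme; the folding step here is a toy stand-in for the carefully designed many-to-one transforms in the paper):

```python
import hashlib
import numpy as np

def transform_features(minutiae, key):
    """Key-dependent, many-to-one transformation of (x, y, theta) minutiae.
    The key seeds a pseudo-random offset per coordinate; folding the shifted
    values gives two preimages per output, so the stored features cannot be
    uniquely mapped back even when the (nonsecret) key is known."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    offsets = np.random.default_rng(seed).integers(0, 256, size=minutiae.shape)
    shifted = (minutiae + offsets) % 512
    return np.abs(shifted - 256)          # fold: many-to-one, hence one-way

def match_score(template, key, sample, tol=4.0):
    """Map a fresh sample through the template's key and compare in the
    transformed space: fraction of template minutiae that have a transformed
    sample minutia within `tol` (a crude toy matcher)."""
    t = transform_features(sample, key)
    d = np.linalg.norm(template[:, None, :2] - t[None, :, :2], axis=2)
    return float((d.min(axis=1) < tol).mean())
```

If a template is compromised, issuing a new key yields an entirely different template from the same finger, while matching still happens in the transformed space.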
A key step in cancelable template construction is precise registration, which requires accurate singular point detection; unfortunately, this is a difficult task for poor-quality fingerprints. A very promising technique, which can also be used to reliably detect singular points, is proposed in "A Fingerprint Orientation Model Based on 2D Fourier Expansion (FOMFE) and Its Application to Singular-Point Detection and Fingerprint Indexing" by Yi Wang, Jiankun Hu, and Damien Phillips. The paper introduces a fingerprint orientation model based on trigonometric polynomials. The FOMFE approach does not require prior knowledge of singular points, which is a highly desirable feature. It is able to effectively describe the overall ridge topology, including the singular point regions, even for noisy fingerprints. Applications of this model include restoration of poor-quality fingerprint areas through contextual filtering, singularity detection, fingerprint classification, and indexing.
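The idea of fitting a global trigonometric-polynomial model to a ridge orientation field can be sketched as follows (a simplified least-squares variant with an assumed grid, basis order, and normalization, not the FOMFE formulation itself; fitting cos 2θ and sin 2θ rather than θ sidesteps the π-periodicity of ridge orientations):

```python
import numpy as np

def fit_orientation_field(theta, order=2):
    """Least-squares fit of a truncated 2D Fourier (trigonometric-polynomial)
    expansion to an orientation field theta, returning the smoothed field.
    The doubled-angle trick (cos 2θ, sin 2θ) makes the representation
    continuous despite the π-ambiguity of ridge orientations."""
    h, w = theta.shape
    yy, xx = np.meshgrid(np.linspace(0, np.pi, h),
                         np.linspace(0, np.pi, w), indexing="ij")
    basis = []
    for m in range(order + 1):
        for n in range(order + 1):
            basis += [np.cos(m * xx) * np.cos(n * yy),
                      np.cos(m * xx) * np.sin(n * yy),
                      np.sin(m * xx) * np.cos(n * yy),
                      np.sin(m * xx) * np.sin(n * yy)]
    A = np.stack([b.ravel() for b in basis], axis=1)
    # Fit the two doubled-angle components separately by least squares.
    cc, *_ = np.linalg.lstsq(A, np.cos(2 * theta).ravel(), rcond=None)
    cs, *_ = np.linalg.lstsq(A, np.sin(2 * theta).ravel(), rcond=None)
    # Reconstruct a smooth orientation field from the fitted model.
    return 0.5 * np.arctan2(A @ cs, A @ cc).reshape(h, w)
```

Because the model is global and smooth, it can interpolate orientations across noisy or corrupted regions, which is what enables the contextual-filtering and singularity-analysis applications mentioned above.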
There are three iris papers in this special issue, indicating a growing interest in iris recognition. Continuous efforts have been made in the search for effective and robust iris coding methods ever since Daugman's pioneering work on iris recognition was published in this journal more than 10 years ago. The paper "DCT-Based Iris Recognition" by Donald M. Monro, Soumyadip Rakshit, and Dexin Zhang presents a novel iris coding method. The method generates iris codes (i.e., iris feature vectors) based on the zero-crossings of the differences of one-dimensional DCT coefficients computed from local overlapping angular patches of normalized iris images. The use of the DCT instead of other well-known transforms is motivated by its near-optimal decorrelating properties as compared to the Karhunen-Loève transform. The second iris paper, "A Bayesian Approach to Deformed Pattern Matching of Iris Images" by Jason Thornton, Marios Savvides, and B.V.K. Vijaya Kumar, addresses a very important problem in iris recognition: the deformation of the iris. The iris region is constantly compressed or expanded due to pupil dilation and contraction. Such deformation, which is usually nonlinear, can have adverse effects on the performance of an iris recognition system if not properly addressed. Instead of handling deformation and iris matching in two separate steps, the authors formulate the two problems as a single MAP estimation process. The process solves the two problems simultaneously by estimating the deformation parameters (representing local translations) and, at the same time, returning a similarity metric which can be used for iris matching. Most existing iris recognition algorithms assume cooperative users so that acquired iris images are of sufficient quality. Under less cooperative and less controlled imaging conditions, captured iris images may be very heterogeneous in terms of focus, contrast, view angle, occlusion, reflection, etc.
The third iris paper in this special issue, "Toward Noncooperative Iris Recognition: A Classification Approach Using Multiple Signatures" by Hugo Proença and Luís A. Alexandre, reports a method for iris recognition with heterogeneous images. The method is based on the observation that such heterogeneity often occurs in local regions. Therefore, the segmented and size-normalized iris images are first divided into six local regions, four in the radial direction and two in the angular direction. Each region undergoes independent feature extraction and comparison. The comparison results (dissimilarity values) from these regions are then combined to arrive at a final decision.
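Returning to the DCT-based coding described earlier, the zero-crossing idea can be sketched in a few lines (a toy illustration with assumed patch sizes and coefficient counts, not Monro et al.'s actual parameterization):

```python
import numpy as np

def dct_ii(x):
    """Unnormalized 1D DCT-II, numpy only (scipy.fft.dct would also do)."""
    n = np.arange(len(x))
    return np.array([(x * np.cos(np.pi * k * (2 * n + 1) / (2 * len(x)))).sum()
                     for k in range(len(x))])

def iris_code(strip, patch=16, step=8, keep=8):
    """Toy DCT-based iris code: take 1D DCTs of overlapping angular patches
    of a normalized iris strip, then binarize the differences of adjacent
    patches' coefficients by sign (the zero-crossing information)."""
    coeffs = []
    for start in range(0, strip.shape[1] - patch + 1, step):
        sig = strip[:, start:start + patch].mean(axis=0)   # collapse radially
        coeffs.append(dct_ii(sig)[:keep])
    return (np.diff(np.array(coeffs), axis=0) > 0).astype(np.uint8)

def hamming(a, b):
    """Fractional Hamming distance between two binary iris codes."""
    return float(np.mean(a != b))
```

As with Daugman-style codes, genuine comparisons cluster near zero Hamming distance while impostor comparisons cluster near one half.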
Face recognition papers made up the majority of submissions and accepted papers. There are six in this issue. The notorious sensitivity of conventional face recognition systems to changes in illumination motivates the work reported in three of these papers. In "Physiology-Based Face Recognition in the Thermal Infrared Spectrum" by Pradeep Buddharaju, Ioannis T. Pavlidis, Panagiotis Tsiamyrtzis, and Mike Bazakos, the face imaging is confined to the thermal spectrum. This sensing technology is used to capture the innate vascular structure under the skin of the subject as well as its physiology. The minutiae of the vascular network serve as features for the subsequent matching process. In "Illumination Invariant Face Recognition Using Near-Infrared Images" by Stan Z. Li, RuFeng Chu, ShengCai Liao, and Lun Zhang, the visible spectrum is avoided by the deployment of active near-infrared (NIR) imaging. Any residual effects of the visible spectrum on the imaging process are attenuated by filtering. The inherent resilience of the NIR imaging method to changes in environmental conditions is further enhanced by the use of a Local Binary Pattern representation coupled with a powerful machine learning technique. In "Three-Dimensional Face Recognition in the Presence of Facial Expressions: An Annotated Deformable Model Approach" by Ioannis A. Kakadiaris, Georgios Passalis, George Toderici, Mohammed N. Murtuza, Yunliang Lu, Nikos Karampatziakis, and Theoharis Theoharis, invariance to illumination is achieved by working with the 3D shape of the face rather than its reflectance properties. The paper extends the authors' previous work by representing the flexibly registered 3D face not only in terms of 3D geometry but also using a normal map. The flexible registration is the key to effective handling of changes in facial expression between gallery and probe 3D face images.
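The Local Binary Pattern representation mentioned above is attractive precisely because it depends only on the ordering of gray values, not on their absolute level. A basic 8-neighbor LBP (the standard operator, sketched here independently of the paper) makes this invariance explicit:

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbor Local Binary Pattern: each interior pixel is encoded
    by thresholding its 8 neighbors against the center value and packing the
    resulting bits into one byte."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbor view
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes, usable as a descriptor
    for a face or a face region."""
    hist = np.bincount(lbp_8_1(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Because each bit records only whether a neighbor is at least as bright as the center, any monotonic change in illumination intensity leaves the code unchanged.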
Two papers address general methodological issues of face recognition system design. Although the advocated methods are presented in the context of face recognition, they are relevant in a much broader sense, not only to different face imaging technologies but potentially to other biometric modalities as well. The first paper, "Globally Maximizing, Locally Minimizing: Unsupervised Discriminant Projection with Applications to Face and Palm Biometrics" by Jian Yang, David Zhang, Jing-yu Yang, and Ben Niu, revisits Linear Discriminant Analysis (LDA). Like PCA, this simple data representation technique has a surprising capacity to inspire new variants with improved properties. The strategy of simultaneously maximizing the global scatter and minimizing the local scatter adapts LDA so that it works successfully for data distributed on manifolds, which is typically the case for face data. The second paper is concerned with "Modeling and Predicting Face Recognition System Performance Based on Analysis of Similarity Scores," written by Peng Wang, Qiang Ji, and James L. Wayman. The system performance is modeled by measuring similarity scores for all pairs of images in the gallery set. The resulting "perfect recognition similarity model," normalized so that it is independent of the adopted similarity measure, can be used to tune the parameters of the face recognition system offline and to predict the system performance for input query images online.
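The "globally maximizing, locally minimizing" strategy can be sketched as a generalized eigenproblem (an illustrative simplification under our own neighborhood definition and normalizations, not the authors' exact formulation):

```python
import numpy as np

def udp_projection(X, n_neighbors=3, n_components=1):
    """Sketch of an unsupervised discriminant projection: find directions
    that maximize the ratio of global (total) scatter to local scatter,
    where 'local' pairs are symmetrized k-nearest neighbors."""
    n = X.shape[0]
    diffs = X[:, None, :] - X[None, :, :]
    d2 = (diffs ** 2).sum(-1)
    # Symmetrized k-NN adjacency defines which pairs count as "local".
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    W = np.zeros((n, n))
    W[np.arange(n)[:, None], idx] = 1.0
    W = np.maximum(W, W.T)
    SL = np.einsum("ij,ija,ijb->ab", W, diffs, diffs) / (2 * W.sum())  # local scatter
    Xc = X - X.mean(axis=0)
    ST = Xc.T @ Xc / n                                                 # total scatter
    # Generalized eigenproblem ST v = lambda SL v; keep leading eigenvectors.
    evals, evecs = np.linalg.eig(np.linalg.pinv(SL) @ ST)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_components]]
```

The leading eigenvectors align with directions of large global spread that are not merely local noise, which is what lets this family of methods cope with data distributed on manifolds.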
The final face paper is on face detection. In noncooperative scenarios of face recognition, as exemplified, for instance, by face image data captured by CCTV, one confronts the challenging task of detecting faces in arbitrary poses. Some approaches deal with this problem by brute-force machine learning, where each pose is considered a separate pattern. The paper "High-Performance Rotation Invariant Multiview Face Detection" by Chang Huang, Haizhou Ai, Yuan Li, and Shihong Lao addresses the problem in an innovative way: learning to respond to faces in different poses is performed jointly, with the component pose-sensitive units interacting with each other. The approach calls for novel machine learning methods, which are developed in the paper, including the Vector Boosting extension of AdaBoost.
One paper deals with multimodal biometrics: "Continuous Verification Using Multimodal Biometrics," by Terence Sim, Sheng Zhang, Rajkumar Janakiraman, and Sandeep Kumar. The challenge of combining biometric modalities to achieve more reliable recognition has been studied increasingly in recent years. This paper employs the approach not just for initial authentication but for continuous verification, providing stronger security against attackers who may have initially fooled the system. The paper also argues that the usual false accept and false reject performance metrics must be redefined for continuous verification systems.
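The flavor of continuous verification can be conveyed with a toy decay model (entirely hypothetical, including the `half_life` knob; the paper itself uses a principled probabilistic integration of modalities over time):

```python
def decayed_trust(last_score, elapsed, half_life=30.0):
    """Toy continuous-verification rule: confidence from the most recent
    biometric observation decays with time, so a user must keep supplying
    fresh evidence to stay authenticated."""
    return last_score * 0.5 ** (elapsed / half_life)

def system_trust(observations, now, half_life=30.0):
    """Fuse multiple modalities by taking the best currently-decayed
    confidence among them (e.g., face and fingerprint observations,
    each given as a (time, score) pair)."""
    return max(decayed_trust(s, now - t, half_life) for t, s in observations)
```

When the fused trust drops below a threshold, the system can lock the session or demand a fresh sample, which is the operational difference from one-shot verification.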
Under the "other" category, there are four papers: one on handwriting, one on the ear, and two on brain waves (EEG). One of the oldest biometrics used for forensic purposes (albeit done manually until recently) is writer identification. The paper "Text Independent Writer Identification and Verification Using Textural and Allographic Features" by Marius Bulacu and Lambert Schomaker describes a text-independent method applicable to both cursive and isolated characters. It combines texture and allograph features to reliably identify writers in databases containing on the order of 1,000 writers. The ear biometric is relatively new and not yet widely studied. The paper "Human Ear Recognition in 3D" by Hui Chen and Bir Bhanu gives a good overview of this modality and proposes a method that uses color and 3D ear features for recognition. Finally, there are two short papers on a biometric whose horizon is likely further in the future: brain waves, or EEG. The paper "Biometrics from the Brain Electrical Activity: A Machine Learning Approach" by Ramaswamy Palaniappan and Danilo P. Mandic describes how subjects can be identified by their brain's electrical response to visual stimuli, while the paper "Person Authentication Using Brainwaves (EEG) and Maximum A Posteriori Model Adaptation" by Sébastien Marcel and José del R. Millán uses EEG data recorded while subjects perform mental tasks (e.g., "imagine the left or right arm moving," "generate words beginning with a given letter"). Both papers describe work tested only on small databases to date and leave open questions of practical deployment, but they indicate that other biometric modalities may prove distinctive and applicable to security once the technology advances enough to bring them into the mainstream.
This selection of papers should give readers a good idea of where researchers have been focusing, both on long-studied problems still needing more work and on newer challenges. A constant in this field is the ever-increasing need for better recognition accuracy and stronger security. But, as public and commercial biometric deployments grow in number, there is also a greater need to understand privacy issues and to provide ease of use. The volume and quality of papers in this special issue indicate that much progress has been made in many aspects of the biometrics field and that there are challenging and promising directions still to pursue.