A Video Database of Moving Faces and People
May 2005 (vol. 27 no. 5)
pp. 812-816
We describe a database of static images and video clips of human faces and people that is useful for testing algorithms for face and person recognition, head/eye tracking, and computer graphics modeling of natural human motions. For each person there are nine static "facial mug shots" and a series of video streams. The videos include a "moving facial mug shot," a facial speech clip, one or more dynamic facial expression clips, two gait videos, and a conversation video taken at a moderate distance from the camera. Complete data sets are available for 284 subjects, and duplicate data sets, taken subsequent to the original set, are available for 229 subjects.
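The per-subject contents described above can be sketched as a simple record type. This is a hypothetical illustration only — the database's actual file layout, naming scheme, and access API are not specified in the abstract, so the class name, fields, and completeness check below are assumptions:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class SubjectRecord:
    """One subject's data set, mirroring the items listed in the abstract.

    All paths are placeholders; the real database's file organization
    is not described here.
    """
    subject_id: str
    mug_shots: List[str] = field(default_factory=list)         # nine static facial mug shots
    moving_mug_shot: str = ""                                  # "moving facial mug shot" video
    speech_clip: str = ""                                      # facial speech clip
    expression_clips: List[str] = field(default_factory=list)  # one or more dynamic expression clips
    gait_videos: List[str] = field(default_factory=list)       # two gait videos
    conversation_video: str = ""                               # conversation video at moderate distance

    def is_complete(self) -> bool:
        """True when every item the abstract lists for a complete set is present."""
        return (len(self.mug_shots) == 9
                and len(self.gait_videos) == 2
                and len(self.expression_clips) >= 1
                and all([self.moving_mug_shot,
                         self.speech_clip,
                         self.conversation_video]))
```

A duplicate session (available for 229 of the 284 subjects) could then be stored as a second `SubjectRecord` with the same `subject_id`.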


Index Terms:
Face database, face recognition, face tracking, digital video.
Citation:
Alice J. O'Toole, Joshua Harms, Sarah L. Snow, Dawn R. Hurst, Matthew R. Pappas, Janet H. Ayyad, Hervé Abdi, "A Video Database of Moving Faces and People," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 812-816, May 2005, doi:10.1109/TPAMI.2005.90