Meticulously Detailed Eye Region Model and Its Application to Analysis of Facial Images
May 2006 (vol. 28, no. 5)
pp. 738-752
We propose a system that is capable of detailed analysis of eye region images in terms of the position of the iris, degree of eyelid opening, and the shape, complexity, and texture of the eyelids. The system uses a generative eye region model that parameterizes the fine structure and motion of an eye. The structure parameters represent structural individuality of the eye, including the size and color of the iris, the width, boldness, and complexity of the eyelids, the width of the bulge below the eye, and the width of the illumination reflection on the bulge. The motion parameters represent movement of the eye, including the up-down position of the upper and lower eyelids and the 2D position of the iris. The system first registers the eye model to the input in a particular frame and individualizes it by adjusting the structure parameters. The system then tracks motion of the eye by estimating the motion parameters across the entire image sequence. Combined with image stabilization to compensate for appearance changes due to head motion, the system achieves accurate registration and motion recovery of eyes.
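The estimation loop described above — register a parameterized eye template to an image, then recover motion parameters (iris position, eyelid opening) by minimizing the pixel-wise difference via gradient descent — can be illustrated with a minimal sketch. The template below (`render_eye`, `fit_motion`, the pixel values, and the three-parameter model) is a toy stand-in invented for illustration, not the paper's actual multi-layered, texture-mapped model; it renders a soft-edged iris disc between two eyelid lines so that the squared-error loss is smooth enough for finite-difference gradient descent.

```python
import numpy as np

H, W = 32, 48  # eye-region template size in pixels (illustrative choice)

def render_eye(p):
    """Toy generative eye-region template.

    p = (iris_x, iris_y, eyelid_opening).  Sigmoid edges keep the
    rendering smooth so numerical gradients are informative.
    """
    ix, iy, opening = p
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    # Soft-edged iris disc of radius ~6 px centered at (ix, iy).
    iris = 1.0 / (1.0 + np.exp(np.hypot(xs - ix, ys - iy) - 6.0))
    # Palpebral aperture between soft upper and lower eyelid lines.
    top, bot = H / 2 - opening / 2, H / 2 + opening / 2
    aperture = 1.0 / (1.0 + np.exp(-(ys - top))) / (1.0 + np.exp(ys - bot))
    eye = 0.8 - 0.6 * iris                          # sclera 0.8, iris 0.2
    return aperture * eye + (1.0 - aperture) * 0.6  # eyelid skin 0.6

def sse(p, target):
    """Sum of squared pixel differences between rendering and input."""
    return float(np.sum((render_eye(p) - target) ** 2))

def fit_motion(target, p0, iters=300, eps=0.25, lr=0.02):
    """Estimate motion parameters by gradient descent on the pixel
    error, using forward-difference gradients and backtracking steps."""
    p = np.asarray(p0, dtype=float)
    cur = sse(p, target)
    for _ in range(iters):
        grad = np.array([(sse(p + eps * e, target) - cur) / eps
                         for e in np.eye(p.size)])
        step = lr
        while step > 1e-6:              # backtrack until the loss drops
            cand = p - step * grad
            c = sse(cand, target)
            if c < cur:
                p, cur = cand, c
                break
            step /= 2.0
    return p

p_true = np.array([24.0, 16.0, 12.0])  # iris x, iris y, eyelid opening
target = render_eye(p_true)            # synthetic "observed" eye image
p0 = [21.0, 14.0, 9.0]                 # rough initial registration
p_hat = fit_motion(target, p0)         # recovered motion parameters
```

In the paper's full system the template has many more structure and motion parameters and the input is a stabilized video frame, but the fitting principle is the same: descend the image-difference error from an initial registration.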

[1] A. Kapoor, Y. Qi, and R.W. Picard, “Fully Automatic Upper Facial Action Recognition,” Proc. IEEE Int'l Workshop Analysis and Modeling of Faces and Gestures, pp. 195-202, Oct. 2003.
[2] T. Moriyama, T. Kanade, J.F. Cohn, J. Xiao, Z. Ambadar, J. Gao, and H. Imamura, “Automatic Recognition of Eye Blinking in Spontaneously Occurring Behavior,” Proc. IEEE Int'l Conf. Pattern Recognition, pp. 78-81, Aug. 2002.
[3] Y. Tian, T. Kanade, and J.F. Cohn, “Recognizing Action Units for Facial Expression Analysis,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 97-115, Feb. 2001.
[4] Y. Matsumoto, T. Ogasawara, and A. Zelinsky, “Behavior Recognition Based on Head Pose and Gaze Direction Measurement,” Proc. IEEE/RSJ Int'l Conf. Intelligent Robots and Systems, pp. 2127-2132, 2000.
[5] J. Zhu and J. Yang, “Subpixel Eye Gaze Tracking,” Proc. IEEE Int'l Conf. Automatic Face and Gesture Recognition, pp. 131-136, May 2002.
[6] J.G. Wang and E. Sung, “Study on Eye Gaze Estimation,” IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 32, no. 3, pp. 332-350, 2002.
[7] K. Fukuda, “Eye Blinks: New Indices for the Detection of Deception,” Psychophysiology, vol. 40, no. 3, pp. 239-245, 2001.
[8] R. Gross, J. Shi, and J. Cohn, “Quo Vadis Face Recognition?” Proc. Third Workshop Empirical Evaluation Methods in Computer Vision, Dec. 2001.
[9] Facial Action Coding System, P. Ekman et al., eds. Salt Lake City, Utah: Research Nexus, Network Research Information, 2002.
[10] P. Ekman and E. Rosenberg, What the Face Reveals, second ed. New York: Oxford Univ. Press, 1994.
[11] S.B. Gokturk, J.Y. Bouguet, C. Tomasi, and B. Girod, “Model-Based Face Tracking for View-Independent Facial Expression Recognition,” Proc. IEEE Face and Gesture Conf., pp. 272-278, 2002.
[12] M. Pantic and L.J.M. Rothkrantz, “Automatic Analysis of Facial Expression: The State of the Art,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1424-1445, Dec. 2000.
[13] I. Ravyse, H. Sahli, and J. Cornelis, “Eye Activity Detection and Recognition Using Morphological Scale-Space Decomposition,” Proc. IEEE Int'l Conf. Pattern Recognition, vol. 1, pp. 5080-5083, 2000.
[14] S.H. Choi, K.S. Park, M.W. Sung, and K.H. Kim, “Dynamic and Quantitative Evaluation of Eyelid Motion Using Image Analysis,” Medical and Biological Eng. and Computing, vol. 41, no. 2, pp. 146-150, 2003.
[15] R. Herpers, M. Michaelis, K.H. Lichtenauer, and G. Sommer, “Edge and Keypoint Detection in Facial Regions,” Proc. IEEE Face and Gesture Conf., pp. 212-217, 1996.
[16] H. Chen, Y.Q. Yu, H.Y. Shum, S.C. Zhu, and N.N. Zheng, “Example Based Facial Sketch Generation with Non-Parametric Sampling,” Proc. IEEE Int'l Conf. Computer Vision, vol. 2, pp. 433-438, 2001.
[17] S.P. Lee, J.B. Badler, and N.I. Badler, “Eyes Alive,” Proc. Int'l Conf. Computer Graphics and Interactive Techniques, pp. 637-644, 2002.
[18] X. Xie, R. Sudhakar, and H. Zhuang, “On Improving Eye Feature Extraction Using Deformable Templates,” Pattern Recognition, vol. 27, no. 6, pp. 791-799, June 1994.
[19] J. Deng and F. Lai, “Region-Based Template Deformation and Masking for Eye-Feature Extraction and Description,” Pattern Recognition, vol. 30, no. 3, pp. 403-419, Mar. 1997.
[20] G. Chow and X. Li, “Towards a System for Automatic Facial Feature Detection,” Pattern Recognition, vol. 26, no. 12, pp. 1739-1755, Dec. 1993.
[21] A. Yuille, D. Cohen, and P. Hallinan, “Feature Extraction from Faces Using Deformable Templates,” Int'l J. Computer Vision, vol. 8, no. 2, pp. 99-111, Aug. 1992.
[22] Y. Tian, T. Kanade, and J.F. Cohn, “Eye-State Detection by Local Regional Information,” Proc. Int'l Conf. Multimodal User Interface, pp. 143-150, Oct. 2000.
[23] L. Sirovich and M. Kirby, “Low-Dimensional Procedure for the Characterization of Human Faces,” J. Optical Soc. of Am., vol. 4, pp. 519-524, 1987.
[24] M.A. Turk and A.P. Pentland, “Face Recognition Using Eigenfaces,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 586-591, 1991.
[25] I. King and L. Xu, “Localized Principal Component Analysis Learning for Face Feature Extraction,” Proc. Workshop 3D Computer Vision, pp. 124-128, 1997.
[26] J. Xiao, T. Moriyama, T. Kanade, and J.F. Cohn, “Robust Full-Motion Recovery of Head by Dynamic Templates and Re-Registration Techniques,” Int'l J. Imaging Systems and Technology, vol. 13, pp. 85-94, Sept. 2003.
[27] B.D. Lucas and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision,” Proc. Int'l Joint Conf. Artificial Intelligence, pp. 674-679, 1981.
[28] T. Kanade, J.F. Cohn, and Y. Tian, “Comprehensive Database for Facial Expression Analysis,” Proc. IEEE Face and Gesture Conf., pp. 46-53, 2000.
[29] P. Ekman, J. Hager, C.H. Methvin, and W. Irwin, “Ekman-Hager Facial Action Exemplars,” Human Interaction Laboratory, Univ. of California, San Francisco, unpublished data.
[30] M. Pantic and L.J.M. Rothkrantz, “Expert System for Automatic Analysis of Facial Expression,” Image and Vision Computing, vol. 18, no. 11, pp. 881-905, Aug. 2000.
[31] J.J. Lien, T. Kanade, J.F. Cohn, and C. Li, “Detection, Tracking, and Classification of Subtle Changes in Facial Expression,” J. Robotics and Autonomous Systems, vol. 31, pp. 131-146, 2000.
[32] Active Vision, A. Blake and A. Yuille, eds., chapter 2, pp. 21-38. MIT Press, 1992.

Index Terms:
Computer vision, facial image analysis, facial expression analysis, generative eye region model, motion tracking, texture modeling, gradient descent.
Tsuyoshi Moriyama, Takeo Kanade, Jing Xiao, Jeffrey F. Cohn, "Meticulously Detailed Eye Region Model and Its Application to Analysis of Facial Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 738-752, May 2006, doi:10.1109/TPAMI.2006.98