Remote and head-motion-free gaze tracking for real environments with automated head-eye model calibrations
Anchorage, AK, USA, June 23-28, 2008
ISBN: 978-1-4244-2339-2
pp. 1-6
Akira Utsumi , ATR Intelligent Robotics and Communication Laboratories, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
Hirotake Yamazoe , ATR Intelligent Robotics and Communication Laboratories, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
Shinji Abe , ATR Intelligent Robotics and Communication Laboratories, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
ABSTRACT
We propose a gaze estimation method that substantially relaxes the practical constraints imposed by most conventional methods. Gaze estimation research has a long history, and many systems, including commercial ones, have been proposed. However, the application domain of gaze estimation is still limited (e.g., measurement devices for HCI studies, input devices for VDT work) because of the limitations of such systems. First, users must stay close to the system (or wear it), since most systems rely on IR illumination and/or stereo cameras. Second, users must perform manual calibration to obtain geometrically meaningful data. These limitations prevent applications that capture and exploit human gaze information in everyday situations. In our method, inspired by the bundle adjustment framework, the parameters of a 3D head-eye model are robustly estimated by minimizing pixel-wise re-projection errors between single-camera input images and eye-model projections over multiple frames, together with the head poses estimated for those frames. Since this process runs automatically, users need not be aware of it. Using the estimated parameters, 3D head poses and gaze directions for newly observed images can then be determined directly by the same error minimization. This mechanism enables robust gaze estimation from low-resolution single-camera images without any user-aware preparation (i.e., calibration). Experimental results show that the proposed method achieves 6° accuracy with QVGA (320 × 240) images. The algorithm is independent of observation distance; we confirmed that our system works at long observation distances (10 meters).
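To make the calibration step concrete, the following is a minimal sketch of the bundle-adjustment-style fitting the abstract describes: per-user head-eye model parameters are refined by minimizing pixel-wise re-projection errors accumulated over several frames. The three-parameter eyeball-offset model, the toy pinhole projection, the focal length, and all names below are illustrative assumptions, not the authors' implementation.

# Illustrative sketch (not the paper's code): bundle-adjustment-style
# estimation of head-eye model parameters by minimizing re-projection
# error over multiple frames with known per-frame head poses.
import numpy as np
from scipy.optimize import least_squares

def project_eye_model(params, head_pose):
    # Hypothetical model: params[:3] is the eyeball-center offset in the
    # head coordinate frame. A real implementation would project a full
    # 3D eye model (eyeball center, iris contour) into the image.
    offset = params[:3]
    R, t = head_pose                 # head rotation (3x3), translation (3,)
    center_cam = R @ offset + t     # eyeball center in camera coordinates
    f = 500.0                        # assumed focal length in pixels
    u = f * center_cam[0] / center_cam[2]
    v = f * center_cam[1] / center_cam[2]
    return np.array([u, v])

def residuals(params, head_poses, observations):
    # Stack the re-projection errors of all frames into one vector,
    # as in bundle adjustment.
    errs = [project_eye_model(params, pose) - obs
            for pose, obs in zip(head_poses, observations)]
    return np.concatenate(errs)

# Toy data: head poses (estimated elsewhere per frame) and observed
# eye-feature positions in pixels.
head_poses = [(np.eye(3), np.array([0.0, 0.0, 1000.0])) for _ in range(10)]
observations = [np.array([2.0, -1.0]) for _ in range(10)]

result = least_squares(residuals, x0=np.zeros(3),
                       args=(head_poses, observations))
print("estimated head-eye parameters:", result.x)

In the paper, the same error minimization is then reused frame by frame, with the calibrated model fixed, to recover head pose and gaze direction for newly observed images; the sketch above only illustrates the automatic parameter fitting.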
CITATION
Akira Utsumi, Hirotake Yamazoe, Shinji Abe, "Remote and head-motion-free gaze tracking for real environments with automated head-eye model calibrations," 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1-6, doi:10.1109/CVPRW.2008.4563184