Issue No. 02, Feb. 2013 (vol. 35)
Y. Sugano , Sato Lab., Univ. of Tokyo, Tokyo, Japan
Y. Matsushita , Microsoft Res. Asia, Beijing, China
Y. Sato , Sato Lab., Univ. of Tokyo, Tokyo, Japan
We propose a gaze sensing method that uses visual saliency maps and requires no explicit personal calibration. Our goal is to build a gaze estimator using only eye images captured while a person watches a video clip. Our method treats the saliency maps of the video frames as probability distributions over gaze points. To identify likely gaze points efficiently, we aggregate the saliency maps based on the similarity of the corresponding eye images, and we learn a mapping from eye images to gaze points using Gaussian process regression. In addition, a feedback loop from the gaze estimator refines the gaze probability maps, further improving estimation accuracy. Experimental results show that the proposed method works well across different people and video clips, achieving a 3.5-degree accuracy that is sufficient for estimating a user's attention on a display.
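The regression step described above can be illustrated with a minimal, pure-Python sketch. This is not the authors' implementation: here a 1-D scalar "eye feature" stands in for an eye image, and Gaussian process regression with a squared-exponential kernel (an assumed kernel choice) maps it to a single gaze coordinate via the standard posterior-mean formula.

```python
import math

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel on scalar "eye feature" values.
    return math.exp(-((a - b) ** 2) / (2 * ell ** 2))

def solve(A, y):
    # Gaussian elimination with partial pivoting for a small dense system Ax = y.
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def gp_predict(train_x, train_y, test_x, noise=1e-3):
    # GP posterior mean: k_* @ (K + noise*I)^-1 @ y.
    # train_x: observed eye features; train_y: gaze coordinates
    # inferred from the aggregated saliency (gaze probability) maps.
    n = len(train_x)
    K = [[rbf(train_x[i], train_x[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, train_y)
    return sum(rbf(test_x, train_x[i]) * alpha[i] for i in range(n))

# Hypothetical training data: eye features 0..3 paired with gaze positions 2x.
print(gp_predict([0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 4.0, 6.0], 1.5))
```

In the paper the inputs are full eye images and the targets come from saliency maps rather than calibration points; the GP machinery, however, is the same posterior-mean computation shown here.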
Visualization, Estimation, Calibration, Feature extraction, Accuracy, Face, Humans
Y. Sugano, Y. Matsushita, Y. Sato, "Appearance-Based Gaze Estimation Using Visual Saliency", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 35, no. 2, pp. 329-341, Feb. 2013, doi:10.1109/TPAMI.2012.101