Issue No. 3 - March 2012 (vol. 18), pp. 356-368
S. Hillaire, Orange Labs, INRIA/IRISA, Rennes, France
A. Lécuyer, INRIA, Rennes, France
T. Regia-Corte, INRIA, Rennes, France
R. Cozot, Orange Labs, Cesson-Sévigné, France
J. Royan, Orange Labs, Cesson-Sévigné, France
G. Breton, Orange Labs, Cesson-Sévigné, France
This paper studies the design and application of a novel visual attention model that computes the user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context that can compute, in real time, a continuous gaze point position rather than a set of 3D objects potentially observed by the user. To do so, in contrast to previous models that use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes that take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines bottom-up and top-down components to compute a continuous on-screen gaze point position intended to match the user's actual gaze. We conducted an experiment to study and compare the performance of our method against a state-of-the-art approach. Our results are significantly better, in some cases with more than a 100 percent gain in accuracy. This suggests that computing a gaze point in a 3D virtual environment in real time is feasible and is a valid alternative to object-based approaches. Finally, we describe several applications of our model for the exploration of virtual environments. We present algorithms that improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that relies heavily on multiple-texture sampling. We show that the gaze information from our visual attention model can be used to increase visual quality where the user is looking, while maintaining a high refresh rate.
Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system: depth-of-field blur, camera motions, and dynamic luminance. All of these effects are computed from the simulated gaze of the user and are meant to improve the user's sensations in future virtual reality applications.
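As a rough illustration of the ideas above (a gaze point obtained by combining bottom-up and top-down components, a gaze-based level-of-detail choice, and a gaze-based depth-of-field blur), the following Python sketch uses hypothetical maps, parameters, and function names; it is a minimal toy under those assumptions, not the paper's actual GPU implementation:

```python
import math

def gaze_point(bottom_up, top_down):
    """Return the (x, y) pixel maximizing the product of a bottom-up
    saliency map and a top-down relevance map of equal size (rows of
    floats). Hypothetical stand-in for the paper's full model."""
    best, best_val = (0, 0), -1.0
    for y, (bu_row, td_row) in enumerate(zip(bottom_up, top_down)):
        for x, (bu, td) in enumerate(zip(bu_row, td_row)):
            v = bu * td
            if v > best_val:
                best_val, best = v, (x, y)
    return best

def lod_level(pixel, gaze, falloff=0.01, max_level=4):
    """Pick a coarser texture LOD (mip level) as screen-space distance
    from the simulated gaze point grows; level 0 is sharpest."""
    return min(max_level, int(falloff * math.dist(pixel, gaze)))

def dof_blur_radius(pixel_depth, focus_depth, strength=2.0, max_radius=8.0):
    """Blur radius growing with the depth difference between a pixel
    and the depth under the simulated gaze point (the focus plane)."""
    return min(max_radius, strength * abs(pixel_depth - focus_depth))
```

In a renderer, `gaze_point` would run once per frame on downsampled saliency maps, and the two per-pixel functions would live in a shader; the constants here (`falloff`, `strength`, `max_radius`) are illustrative, not values from the paper.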
Keywords: visual attention model, first-person exploration, gaze tracking, 3D virtual environments, real-time systems, solid modeling, mesh generation, multiple-texture sampling, continuous gaze point position, level of detail, visual effects
S. Hillaire, A. Lecuyer, T. Regia-Corte, R. Cozot, J. Royan, G. Breton, "Design and Application of Real-Time Visual Attention Model for the Exploration of 3D Virtual Environments", IEEE Transactions on Visualization & Computer Graphics, vol.18, no. 3, pp. 356-368, March 2012, doi:10.1109/TVCG.2011.154