Issue No. 4 - April 2013 (vol. 19)
pp. 681-690
D. J. Roberts , Univ. of Salford, Salford, UK
J. Rae , Univ. of Roehampton, Roehampton, UK
T. W. Duckworth , Univ. of Salford, Salford, UK
C. M. Moore , Univ. of Salford, Salford, UK
R. Aspin , Univ. of Salford, Salford, UK
ABSTRACT
The aim of our experiment is to determine whether eye-gaze can be estimated from a virtuality human: to within the accuracies that underpin social interaction, and reliably across the gaze poses and camera arrangements likely in everyday settings. The scene is set by explaining why Immersive Virtuality Telepresence has the potential to meet the grand challenge of faithfully communicating both the appearance and the focus of attention of a remote human participant within a shared 3D computer-supported context. Within the experiment, n=22 participants rotated static 3D virtuality humans, reconstructed from surround images, until they felt most looked at. The dependent variable was absolute angular error, which was compared to that underpinning social gaze behaviour in the natural world. The independent variables were 1) the relative orientations of the eye, head and body of the captured subject; and 2) the subset of cameras used to texture the form. The analysis looked for statistical and practical significance and for corroborating qualitative evidence. The results tell us much about the importance and detail of the relationship between gaze pose, method of Video Based Reconstruction (VBR), and camera arrangement. They tell us that virtuality can reproduce gaze to an accuracy useful in social interaction, but that with the adopted method of VBR this is highly dependent on the combination of gaze pose and camera arrangement. This suggests changes to the VBR approach in order to allow more flexible camera arrangements. The work is of interest to those wanting to support expressive meetings that are both socially and spatially situated, and particularly those using or building Immersive Virtuality Telepresence to accomplish this. It is also relevant to the use of virtuality humans in applications ranging from the study of human interactions to gaming and the crossing of the stage line in film and TV.
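The dependent variable, absolute angular error, is the angle between the true gaze direction of the captured subject and the direction the participant judged as most looked at. Below is a minimal sketch of how such an error could be computed from two direction vectors; the function name and vector convention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def absolute_angular_error(true_gaze, judged_direction):
    """Angle in degrees between the true gaze vector and the
    direction a participant judged as 'most looked at'.
    (Illustrative sketch; not the paper's actual code.)"""
    a = np.asarray(true_gaze, dtype=float)
    b = np.asarray(judged_direction, dtype=float)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example: a participant stops the rotation 4 degrees off the true
# line of gaze (both vectors lie in the horizontal plane).
true = [0.0, 0.0, 1.0]
judged = [np.sin(np.radians(4)), 0.0, np.cos(np.radians(4))]
print(f"{absolute_angular_error(true, judged):.1f} deg")  # -> 4.0 deg
```

Averaging this error over participants for each combination of gaze pose and camera subset would yield the kind of per-condition accuracy figures that can be compared against the angular thresholds known to matter in natural social gaze.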
INDEX TERMS
Cameras, Estimation, Accuracy, Image reconstruction, Face, Visualization, Cinematography, virtual worlds, virtual environments, camera placement, hierarchical finite state machines
CITATION
D. J. Roberts, J. Rae, T. W. Duckworth, C. M. Moore, R. Aspin, "Estimating the Gaze of a Virtuality Human", IEEE Transactions on Visualization & Computer Graphics, vol. 19, no. 4, pp. 681-690, April 2013, doi:10.1109/TVCG.2013.30