Issue No. 03 - July-Sept. (2014 vol. 5)
Matthieu Courgeon , Lab-STICC, Université de Bretagne-Sud, 29238 Brest Cedex 3, France
Gilles Rautureau , Emotion Center, CNRS USR 3246, Hôpital de La Salpêtrière, 75651 Paris Cedex 13, France
Jean-Claude Martin , LIMSI-CNRS, Université Paris-Sud, 91403 Orsay Cedex, France
Ouriel Grynszpan , Emotion Center, CNRS USR 3246, Université Pierre et Marie Curie, Hôpital de La Salpêtrière, 75651 Paris Cedex 13, France
This article analyses the issues pertaining to the simulation of joint attention with virtual humans. Gaze is a powerful communication channel, as illustrated by the pivotal role of joint attention in social interactions. To our knowledge, there have been only a few attempts to simulate the gazing patterns associated with joint attention as a means of developing empathic virtual agents. Eye-tracking technologies now make it possible to create non-invasive gaze-contingent systems that let the user lead a virtual human's focus of attention in real time. Although gaze control can be deliberate, most of our visual behavior in everyday life is not. This article reports empirical data suggesting that users are only partially aware of controlling gaze-contingent displays. The technical challenges of detecting the user's focus of attention in virtual reality are reviewed and several solutions are compared. We designed and tested a platform for creating virtual humans endowed with the ability to follow the user's attention. The article discusses the advantages of simulating joint attention for improving interpersonal skills and user engagement. Impaired joint attention is a core feature of autism; the platform we designed is intended for research on and treatment of autism, and our tests included participants with this disorder.
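The gaze-contingent interaction described above can be pictured as a per-frame loop: the eye tracker reports a gaze point, the system maps it to an on-screen object, and the virtual human redirects its own gaze once the user's attention has settled. The sketch below illustrates this idea under stated assumptions; every name in it (`SceneObject`, `resolve_gaze_target`, `joint_attention_step`, the dwell threshold) is hypothetical and not the authors' API.

```python
import math

class SceneObject:
    """An on-screen object the user may fixate (illustrative only)."""
    def __init__(self, name, x, y, radius):
        self.name = name
        self.x, self.y, self.radius = x, y, radius

def resolve_gaze_target(gaze_x, gaze_y, objects):
    """Return the object whose center is nearest the gaze point,
    provided the gaze falls within its radius; otherwise None."""
    best, best_dist = None, float("inf")
    for obj in objects:
        d = math.hypot(gaze_x - obj.x, gaze_y - obj.y)
        if d <= obj.radius and d < best_dist:
            best, best_dist = obj, d
    return best

def joint_attention_step(gaze_sample, objects, dwell_state, dwell_frames=30):
    """One frame of a hypothetical gaze-contingent loop: the agent follows
    the user's attention only after the gaze has dwelt on the same object
    for `dwell_frames` consecutive frames, filtering out brief saccades."""
    target = resolve_gaze_target(*gaze_sample, objects)
    if target is dwell_state.get("object"):
        dwell_state["count"] = dwell_state.get("count", 0) + 1
    else:
        dwell_state["object"], dwell_state["count"] = target, 1
    if target is not None and dwell_state["count"] >= dwell_frames:
        return target  # the virtual human would orient its gaze here
    return None
```

The dwell threshold stands in for whatever fixation-detection policy an actual system would use; the point is only that the agent's gaze is driven by the user's, which is the inversion of the usual direction of gaze following.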
Joint attention, Eye tracking, Virtual humans, Autism, Real-time systems
M. Courgeon, G. Rautureau, J.-C. Martin and O. Grynszpan, "Joint Attention Simulation Using Eye-Tracking and Virtual Humans," in IEEE Transactions on Affective Computing, vol. 5, no. 3, pp. 238-250, 2014.