This article analyses the issues pertaining to the simulation of joint attention with virtual humans. Gaze is a powerful communication channel, as illustrated by the pivotal role of joint attention in social interactions. To our knowledge, there have been only a few attempts to simulate the gazing patterns associated with joint attention as a means for developing empathic virtual agents. Eye-tracking technologies now enable the creation of non-invasive gaze-contingent systems that empower users to lead a virtual human’s focus of attention in real time. Although gaze control can be deliberate, most of our visual behaviors in everyday life are not. This article reports empirical data suggesting that users have only partial awareness of controlling gaze-contingent displays. The technical challenges involved in detecting the user’s focus of attention in virtual reality are reviewed and several solutions are compared. We designed and tested a platform for creating virtual humans endowed with the ability to follow the user’s attention. The article discusses the advantages of simulating joint attention for improving interpersonal skills and user engagement. Impairments of joint attention play a major role in autism. The platform we designed is intended for research on and treatment of autism, and tests included participants with this disorder.
Ouriel Grynszpan, "Joint Attention Simulation using Eye-Tracking and Virtual Humans", IEEE Transactions on Affective Computing, PrePrints, pp. 1, doi:10.1109/TAFFC.2014.2335740