Issue No. 01 - January 2011 (vol. 33)
ISSN: 0162-8828
pp: 101-116
Jean-Marc Odobez, Idiap Research Institute, Martigny
Sileye O. Ba, Lab-STICC, École Nationale Supérieure des Télécommunications de Bretagne, Technopôle Brest-Iroise
ABSTRACT
This paper introduces a novel contextual model for recognizing people's visual focus of attention (VFOA) in meetings from audio-visual perceptual cues. More specifically, instead of independently recognizing the VFOA of each meeting participant from his or her own head pose, we propose to jointly recognize the participants' visual attention in order to introduce context-dependent interaction models that relate to group activity and the social dynamics of communication. Meeting contextual information is represented by the location of people, conversational events identifying floor-holding patterns, and a presentation activity variable. By modeling the interactions between the different contexts and their combined, and sometimes contradictory, impact on gazing behavior, our model allows us to handle VFOA recognition in difficult task-based meetings involving artifacts, presentations, and moving people. We validated our model through rigorous evaluation on a publicly available and challenging data set of 12 real meetings (5 hours of data). The results demonstrate that integrating the dynamic presentation and conversation context with our model can lead to significant performance improvements.
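To make the idea concrete, the following is a minimal sketch, not the authors' implementation: it illustrates how a dynamic Bayesian network of the kind named in the index terms could fuse one participant's head-pose likelihood with shared contextual priors (e.g., the current conversational event and presentation activity) in a single forward-filtering step. All target names, probability tables, and the product-style cue fusion below are hypothetical placeholders chosen for illustration.

    import numpy as np

    # Hypothetical VFOA targets for one meeting participant.
    TARGETS = ["person_A", "person_B", "table", "slide_screen"]
    T = len(TARGETS)

    def filter_step(belief, pose_likelihood, context_prior, transition):
        """One forward-filtering step for a single participant's VFOA.

        belief:          P(f_{t-1} | o_{1:t-1}), shape (T,)
        pose_likelihood: P(o_t | f_t) from a head-pose model, shape (T,)
        context_prior:   P(f_t | context_t), e.g., conditioned on the current
                         conversational event and presentation activity, shape (T,)
        transition:      temporal smoothness P(f_t | f_{t-1}), shape (T, T)
        """
        predicted = transition.T @ belief                         # temporal prediction
        posterior = predicted * pose_likelihood * context_prior   # fuse pose and context cues
        return posterior / posterior.sum()                        # renormalize

    # Usage: the head pose points at person_A, but a presentation is ongoing,
    # so the contextual prior pulls probability toward the slide screen.
    belief = np.full(T, 1.0 / T)                  # uniform initial belief
    transition = 0.8 * np.eye(T) + 0.2 / T        # sticky self-transitions
    pose_likelihood = np.array([0.6, 0.2, 0.1, 0.1])
    context_prior = np.array([0.2, 0.2, 0.1, 0.5])
    belief = filter_step(belief, pose_likelihood, context_prior, transition)
    print(dict(zip(TARGETS, belief.round(3))))

In the paper itself, the contextual variables are inferred jointly with the VFOA of all participants rather than supplied externally; the sketch isolates only the per-participant cue-fusion step to stay self-contained.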
INDEX TERMS
Visual focus of attention, conversational events, multimodal, contextual cues, dynamic Bayesian network, head pose, meeting analysis.
CITATION
Jean-Marc Odobez, Sileye O. Ba, "Multiperson Visual Focus of Attention from Head Pose and Meeting Contextual Cues," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 1, pp. 101-116, January 2011, doi:10.1109/TPAMI.2010.69