ABSTRACT
This paper investigates the recognition of group actions in meetings. A framework is employed in which group actions result from the interactions of the individual participants. The group actions are modeled using different HMM-based approaches, where the observations are provided by a set of audiovisual features monitoring the actions of individuals. Experiments demonstrate the importance of taking interactions into account in modeling the group actions. It is also shown that the visual modality contains useful information, even for predominantly audio-based events, motivating a multimodal approach to meeting analysis.
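To illustrate the kind of HMM-based modeling the abstract refers to, the following is a minimal sketch, not the authors' implementation: one Gaussian HMM is trained per group-action class on concatenated audio-visual feature vectors, and a meeting segment is classified by the model with the highest likelihood. The action labels, state count, feature dimension, and the use of the hmmlearn toolkit are all illustrative assumptions.

# Sketch: likelihood-based classification of group actions with per-class HMMs.
import numpy as np
from hmmlearn import hmm  # assumed available; any HMM toolkit would do

ACTIONS = ["monologue", "discussion", "presentation"]  # illustrative labels
N_STATES = 3      # hidden states per action model (assumption)
FEATURE_DIM = 12  # e.g. audio (speech activity, pitch) + visual (head/hand motion) features

def train_action_models(train_data):
    """train_data: dict mapping action name -> list of (T_i, FEATURE_DIM) arrays."""
    models = {}
    for action, sequences in train_data.items():
        X = np.vstack(sequences)               # stack all training sequences
        lengths = [len(s) for s in sequences]  # per-sequence lengths for hmmlearn
        m = hmm.GaussianHMM(n_components=N_STATES, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[action] = m
    return models

def classify_segment(models, segment):
    """segment: (T, FEATURE_DIM) array of audio-visual observations."""
    # Pick the action whose HMM assigns the highest log-likelihood to the segment.
    return max(models, key=lambda a: models[a].score(segment))

Early integration, where audio and visual features are simply concatenated into a single observation vector as above, is only one of the HMM variants the paper compares; the same per-class likelihood scoring applies to the other structures.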
INDEX TERMS
Statistical models, multimedia applications and numerical signal processing, computer conferencing, asynchronous interaction.
CITATION
Samy Bengio, Iain McCowan, Guillaume Lathoud, Daniel Gatica-Perez, Mark Barnard, Dong Zhang, "Automatic Analysis of Multimodal Group Actions in Meetings," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, pp. 305-317, Mar. 2005, doi: 10.1109/TPAMI.2005.49.