ABSTRACT
This paper investigates the recognition of group actions in meetings. A framework is employed in which group actions result from the interactions of the individual participants. The group actions are modeled using different HMM-based approaches, where the observations are provided by a set of audiovisual features monitoring the actions of individuals. Experiments demonstrate the importance of taking interactions into account in modeling the group actions. It is also shown that the visual modality contains useful information, even for predominantly audio-based events, motivating a multimodal approach to meeting analysis.
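The abstract describes modeling group actions with HMMs whose observations are audio-visual features of the participants. The following is a minimal sketch, not the authors' implementation, of one common HMM-based recognition setup: train one Gaussian HMM per group-action class and label a test sequence by maximum log-likelihood. The action names, feature dimensionality, and synthetic data below are illustrative assumptions only; it uses the hmmlearn library.

```python
# Hypothetical sketch: likelihood-based group-action classification with one
# Gaussian HMM per action class (action set and feature size are assumed).
import numpy as np
from hmmlearn import hmm

ACTIONS = ["discussion", "monologue", "presentation"]  # assumed action set
FEATURE_DIM = 12   # assumed size of the concatenated audio-visual feature vector
N_STATES = 3       # assumed number of hidden states per action model

rng = np.random.default_rng(0)

def synthetic_sequences(n_seq, mean_shift):
    """Generate toy audio-visual feature sequences standing in for one action class."""
    return [rng.normal(mean_shift, 1.0, size=(rng.integers(20, 40), FEATURE_DIM))
            for _ in range(n_seq)]

# Train one HMM per group action on that action's feature sequences.
models = {}
for shift, action in enumerate(ACTIONS):
    seqs = synthetic_sequences(10, mean_shift=float(shift))
    X = np.vstack(seqs)               # hmmlearn expects frames stacked into one array...
    lengths = [len(s) for s in seqs]  # ...plus the length of each individual sequence
    model = hmm.GaussianHMM(n_components=N_STATES, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    models[action] = model

# Classify an unseen sequence by the class model giving the highest log-likelihood.
test_seq = synthetic_sequences(1, mean_shift=1.0)[0]
scores = {action: m.score(test_seq) for action, m in models.items()}
print(max(scores, key=scores.get), scores)
```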
INDEX TERMS
Statistical models, multimedia applications and numerical signal processing, computer conferencing, asynchronous interaction.
CITATION

S. Bengio, I. McCowan, G. Lathoud, D. Gatica-Perez, M. Barnard, and D. Zhang, "Automatic Analysis of Multimodal Group Actions in Meetings," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 305-317, 2005.
doi:10.1109/TPAMI.2005.49