Saint Maarten, Netherlands Antilles
Feb. 10, 2010 to Feb. 16, 2010
ISBN: 978-0-7695-3957-7
pp: 50-55
ABSTRACT
Everyday human communication relies on a large number of different mechanisms, such as spoken language, facial expressions, body pose, and gestures, allowing humans to convey large amounts of information in a short time. In contrast, traditional human-machine communication is often unintuitive and requires specially trained personnel. In this paper, we present a real-time capable framework that recognizes traditional visual human communication signals in order to establish a more intuitive human-machine interaction. Humans rely on the interaction partner's face for identification, which helps them adapt to the interaction partner and utilize context information. Head gestures (head nodding and head shaking) are a convenient way to express agreement or disagreement. Facial expressions give evidence about the interaction partner's emotional state, and hand gestures are a fast way of passing simple commands. The recognition of all interaction cues is performed in parallel, enabled by a shared-memory implementation.
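The abstract's final claim, that all recognition cues run in parallel via a shared-memory implementation, can be illustrated with a minimal sketch. This is not the authors' framework: the recognizer names, the grayscale 640x480 resolution, and the use of Python's multiprocessing shared arrays are all assumptions made purely for illustration. One capture process writes frames into a shared buffer, and several recognizer processes analyze the same frame independently.

import ctypes
import multiprocessing as mp
import time

import numpy as np

WIDTH, HEIGHT = 640, 480  # assumed camera resolution (grayscale for brevity)

def capture(shared, frame_ready, stop):
    # Writer process: in a real framework this would grab camera frames;
    # here we write synthetic noise into the shared buffer.
    buf = np.frombuffer(shared.get_obj(), dtype=np.uint8).reshape(HEIGHT, WIDTH)
    while not stop.is_set():
        frame = (np.random.rand(HEIGHT, WIDTH) * 255).astype(np.uint8)
        with shared.get_lock():
            buf[:] = frame
        frame_ready.set()
        time.sleep(0.03)  # roughly 30 fps

def recognizer(name, shared, frame_ready, stop):
    # Reader process: each recognizer copies the current frame and analyzes
    # it independently; copying out keeps the lock held only briefly.
    buf = np.frombuffer(shared.get_obj(), dtype=np.uint8).reshape(HEIGHT, WIDTH)
    while not stop.is_set():
        if not frame_ready.wait(timeout=0.1):
            continue
        with shared.get_lock():
            frame = buf.copy()
        # Placeholder for an actual vision algorithm (face identification,
        # head-gesture, facial-expression, or hand-gesture recognition).
        print(f"{name}: mean intensity {frame.mean():.1f}")
        time.sleep(0.03)

if __name__ == "__main__":
    shared = mp.Array(ctypes.c_uint8, WIDTH * HEIGHT)
    frame_ready = mp.Event()
    stop = mp.Event()
    procs = [mp.Process(target=capture, args=(shared, frame_ready, stop))]
    for name in ("face-id", "head-gesture", "expression", "hand-gesture"):
        procs.append(mp.Process(target=recognizer,
                                args=(name, shared, frame_ready, stop)))
    for p in procs:
        p.start()
    time.sleep(1.0)  # let the pipeline run briefly
    stop.set()
    for p in procs:
        p.join()

Because every recognizer reads the same shared frame rather than receiving its own copy over a queue, adding another recognition channel costs one extra process but no extra frame transfers, which is the property the paper's parallel design relies on.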
INDEX TERMS
real-time image processing, gesture recognition, human-robot interaction, facial expressions
CITATION
Tobias Rehrl, Alexander Bannat, Jürgen Gast, Frank Wallhoff, Gerhard Rigoll, Christoph Mayer, Zadid Riaz, Bernd Radig, Stefan Sosnowski, Kolja Kühnlenz, "Multiple Parallel Vision-Based Recognition in a Real-Time Framework for Human-Robot-Interaction Scenarios", Proc. International Conference on Advances in Computer-Human Interaction (ACHI 2010), pp. 50-55, doi:10.1109/ACHI.2010.44