<p><b>Abstract</b>—Hand and face gestures are modeled using an appearance-based approach in which patterns are represented as a vector of similarity scores to a set of view models defined in space and time. These view models are learned from examples using unsupervised clustering techniques. A supervised learning paradigm is then used to interpolate view scores into a task-dependent coordinate system appropriate for recognition and control tasks. We apply this analysis to the problem of context-specific gesture interpolation and recognition, and demonstrate real-time systems which perform these tasks.</p>
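The abstract's core idea — represent an input as a vector of similarity scores against a set of view models, then learn a supervised map from that score vector into a task coordinate — can be sketched as follows. This is a hypothetical illustration, not the authors' code: it assumes normalized correlation as the similarity measure, synthetic random templates as the "view models", and a plain affine least-squares fit in place of the paper's supervised interpolation scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarity_vector(patch, views):
    """Normalized-correlation score of a patch against each view template."""
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    scores = []
    for v in views:
        q = (v - v.mean()) / (v.std() + 1e-8)
        scores.append(float((p * q).mean()))
    return np.array(scores)

# Toy "view models": templates sampled at known task parameters
# (e.g. a pose angle normalized to [0, 1]).  Purely synthetic data.
params = np.linspace(0.0, 1.0, 5)
views = [rng.normal(size=(8, 8)) for _ in params]

# Supervised step: fit an affine map from score vectors to task parameters,
# using the views themselves as labeled training examples.
X = np.stack([similarity_vector(v, views) for v in views])
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, params, rcond=None)

# A novel input near one view template should map near that view's parameter.
probe = views[2] + 0.01 * rng.normal(size=(8, 8))
s = similarity_vector(probe, views)
estimate = float(np.append(s, 1.0) @ w)
print(estimate)
```

Because similarity scores vary smoothly as the input moves between views, the learned map interpolates: inputs between two templates receive intermediate task coordinates, which is what makes the representation usable for continuous control as well as recognition.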
Index Terms—Gesture recognition, real-time image processing, expression analysis, view-based representation, spatio-temporal gestures.
Irfan A. Essa, Alex P. Pentland, Trevor J. Darrell, "Task-Specific Gesture Analysis in Real-Time Using Interpolated Views", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 18, pp. 1236-1242, December 1996, doi:10.1109/34.546259