2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Fusion of range and intensity information for view invariant gesture recognition
Anchorage, AK, USA
June 23-28, 2008
ISBN: 978-1-4244-2339-2
M.B. Holte, Computer Vision and Media Technology Laboratory, Aalborg University, Denmark
T.B. Moeslund, Computer Vision and Media Technology Laboratory, Aalborg University, Denmark
P. Fihl, Computer Vision and Media Technology Laboratory, Aalborg University, Denmark
This paper presents a system for view invariant gesture recognition. The approach is based on 3D data from a CSEM SwissRanger SR-2 camera, which produces both a depth map and an intensity image of a scene. Since the two information types are aligned, the intensity image can be used to define a region of interest for the relevant 3D data. This data fusion improves the quality of the range data and hence results in better recognition. The gesture recognition is based on finding motion primitives in the 3D data. The primitives are represented compactly and view invariantly using harmonic shape context. A probabilistic Edit Distance classifier is applied to identify which gesture best describes a string of primitives. The approach is trained on data from one viewpoint and tested on data from a different viewpoint. The recognition rate is 92.9%, which is similar to the rate obtained when training and testing on gestures from the same viewpoint; hence the approach is indeed view invariant.
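The classification step described above compares a string of detected motion primitives against gesture templates using Edit Distance. As a minimal illustrative sketch, the classic (non-probabilistic) Levenshtein variant is shown below; the paper's probabilistic classifier, its primitive alphabet, and the template strings are not specified here, so the gesture names and symbols are hypothetical.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance between two strings."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # cost of deleting all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j  # cost of inserting all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[m][n]

def classify(observed, templates):
    """Return the gesture whose primitive string is closest to the observation."""
    return min(templates, key=lambda g: edit_distance(observed, templates[g]))

# Hypothetical primitive strings: each letter stands in for one motion primitive.
templates = {"wave": "ababab", "point": "ccd"}
```

For example, `classify("abab", templates)` returns `"wave"`, since the observed string needs only two insertions to match the "wave" template but four edits to match "point".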
Citation:
M.B. Holte, T.B. Moeslund, and P. Fihl, "Fusion of range and intensity information for view invariant gesture recognition," Proc. 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1-7, 2008.