A Bayesian Framework for Extracting Human Gait Using Strong Prior Knowledge
November 2006 (vol. 28 no. 11)
pp. 1738-1752
Extracting full-body motion of walking people from monocular video sequences in complex, real-world environments is an important and difficult problem, going beyond simple tracking, whose satisfactory solution demands an appropriate balance between use of prior knowledge and learning from data. We propose a consistent Bayesian framework for introducing strong prior knowledge into a system for extracting human gait. In this work, the strong prior is built from a simple articulated model having both time-invariant (static) and time-variant (dynamic) parameters. The model is easily modified to cater to situations such as walkers wearing clothing that obscures the limbs. The statistics of the parameters are learned from high-quality (indoor laboratory) data and the Bayesian framework then allows us to "bootstrap" to accurate gait extraction on the noisy images typical of cluttered, outdoor scenes. To achieve automatic fitting, we use a hidden Markov model to detect the phases of images in a walking cycle. We demonstrate our approach on silhouettes extracted from fronto-parallel ("sideways on") sequences of walkers under both high-quality indoor and noisy outdoor conditions. As well as high-quality data with synthetic noise and occlusions added, we also test walkers with rucksacks, skirts, and trench coats. Results are quantified in terms of chamfer distance and average pixel error between automatically extracted body points and corresponding hand-labeled points. No one part of the system is novel in itself, but the overall framework makes it feasible to extract gait from very much poorer quality image sequences than hitherto. This is confirmed by comparing person identification by gait using our method and a well-established baseline recognition algorithm.
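The abstract's HMM-based phase detection amounts to finding the most likely sequence of gait phases given per-frame observations, which is the classic Viterbi decoding problem. The sketch below is an illustration only, not the paper's model: the four-phase left-to-right cyclic structure, the transition probabilities, and the discrete emission table are all assumed values chosen for the example.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely HMM state sequence (log domain).

    log_pi : (S,) log initial-state probabilities
    log_A  : (S, S) log transitions, log_A[i, j] = log P(j | i)
    log_B  : (S, O) log emissions, log_B[s, o] = log P(o | s)
    obs    : sequence of discrete observation indices
    """
    S, T = log_pi.shape[0], len(obs)
    delta = np.empty((T, S))            # best log-prob of a path ending in each state
    psi = np.zeros((T, S), dtype=int)   # back-pointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # (from_state, to_state)
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(S)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta[-1]))]              # trace back the best path
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Illustrative cyclic gait-phase model: 4 phases, each either persists
# (prob 0.6) or advances to the next phase (prob 0.4, wrapping around).
S = 4
A = np.full((S, S), 1e-6)               # small floor instead of log(0)
for i in range(S):
    A[i, i], A[i, (i + 1) % S] = 0.6, 0.4
B = np.full((S, S), 0.1)                # noisy per-frame phase observations
np.fill_diagonal(B, 0.7)
pi = np.full(S, 0.25)

path = viterbi(np.log(pi), np.log(A), np.log(B), [0, 0, 1, 2, 3, 3])
```

With these strongly diagonal emissions the decoded path simply follows the observed phase labels; the cyclic transition structure is what lets the model wrap from the last phase back to the first across successive walking cycles.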
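Chamfer distance, the evaluation metric named above, scores how well two edge or silhouette point sets align by averaging nearest-neighbour distances. A minimal brute-force sketch (not the authors' implementation, which would typically use a distance transform for efficiency):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric average chamfer distance between two 2-D point sets.

    a : (N, 2) array of (row, col) points, e.g. model contour samples
    b : (M, 2) array of (row, col) points, e.g. image edge pixels
    Brute force O(N*M); adequate for illustration.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (N, M) pairwise
    # Average of: each a-point to its nearest b-point, and vice versa.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Identical point sets score zero, and the score grows smoothly with misalignment, which is what makes the measure usable both for fitting a model to noisy silhouettes and for quantifying extraction error against hand-labeled points.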


Index Terms:
Bayesian framework, strong prior, articulated motion, human gait, hidden Markov model.
Citation:
Ziheng Zhou, Adam Prügel-Bennett, Robert I. Damper, "A Bayesian Framework for Extracting Human Gait Using Strong Prior Knowledge," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 11, pp. 1738-1752, Nov. 2006, doi:10.1109/TPAMI.2006.214