Issue No. 1, January 2011 (vol. 33)
ISSN: 0162-8828
pp: 30-42
Pei Yin , Microsoft Corp, Redmond
Antonio Criminisi , Microsoft Research Cambridge, Cambridge
John Winn , Microsoft Research Cambridge, Cambridge
Irfan Essa , Georgia Institute of Technology, Atlanta
This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as “motons,” inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.
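As a rough illustration of the final step described above (fusing unary likelihoods and pairwise contrast terms in a CRF, then segmenting by binary min-cut), the following is a minimal sketch, not the authors' implementation: it segments a toy 1-D strip of pixels with Edmonds-Karp max-flow, where the cut that separates source (foreground) from sink (background) minimizes the CRF energy. All function names and cost values here are invented for illustration.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow; returns the set of nodes still reachable
    from the source in the residual graph (the source side of the min cut)."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            break
        # Find the bottleneck capacity along the path, then push flow.
        bottleneck = float('inf')
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
    reach = {source}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in reach and capacity[u][v] - flow[u][v] > 0:
                reach.add(v)
                q.append(v)
    return reach

def bilayer_segment(unary_fg, unary_bg, pairwise):
    """Binary CRF segmentation of a 1-D pixel strip (toy example).
    unary_fg[i] / unary_bg[i]: cost of labeling pixel i foreground / background.
    pairwise[i]: smoothness cost between neighboring pixels i and i+1.
    Returns a list of labels: 1 = foreground, 0 = background."""
    n = len(unary_fg)
    source, sink = n, n + 1
    cap = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(n):
        # If pixel i ends up on the sink side (background), the cut severs
        # the source edge, paying the background unary cost -- and vice versa.
        cap[source][i] = unary_bg[i]
        cap[i][sink] = unary_fg[i]
    for i in range(n - 1):
        cap[i][i + 1] = pairwise[i]
        cap[i + 1][i] = pairwise[i]
    fg_side = max_flow(cap, source, sink)
    return [1 if i in fg_side else 0 for i in range(n)]

# Usage: three pixels strongly foreground, three strongly background,
# with a weak smoothness coupling between neighbors.
labels = bilayer_segment([0, 0, 0, 5, 5, 5], [5, 5, 5, 0, 0, 0], [1, 1, 1, 1, 1])
```

In the paper's pipeline the unary terms would come from the learned motion-context, color, and spatial cues rather than hand-set constants, and the graph would span the full 2-D pixel grid, but the min-cut machinery is the same.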
Computer vision, image understanding, machine learning, decision tree, random forests, boosting, motion analysis.

A. Criminisi, J. Winn, I. Essa and P. Yin, "Bilayer Segmentation of Webcam Videos Using Tree-Based Classifiers," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 33, no. 1, pp. 30-42, 2011.