This paper describes models and algorithms for the real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from color/contrast or from stereo alone is known to be error-prone. Here, color, contrast, and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, layered dynamic programming (LDP), solves stereo in an extended six-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive color model that is learned on the fly, and stereo disparities are obtained by dynamic programming. The second algorithm, layered graph cut (LGC), does not directly solve stereo. Instead, the stereo-match likelihood is marginalized over disparities to evaluate foreground and background hypotheses and then fused with a contrast-sensitive color model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground-truth data and found to have similar performance, substantially better than either stereo or color/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good-quality composite video output.
Stereo vision, Dynamic programming, Image segmentation, Computational efficiency, Application software, Computer vision, Spatial coherence, Streaming media, Video sequences, Heuristic algorithms
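The LGC fusion step described in the abstract, marginalizing the stereo-match likelihood over each layer's disparity range and combining it with a per-pixel color likelihood, can be sketched as follows. This is a simplified illustration, not the paper's implementation: all array names and disparity ranges are hypothetical, and the final decision is a per-pixel MAP instead of the contrast-sensitive ternary graph cut the paper actually solves.

```python
import numpy as np

def fuse_stereo_color(match_lik, color_fg, color_bg, fg_range, bg_range):
    """Simplified bilayer fusion in the spirit of LGC.

    match_lik: (H, W, D) stereo-match likelihoods per disparity (hypothetical)
    color_fg, color_bg: (H, W) color likelihoods under fg/bg color models
    fg_range, bg_range: disparity slices assigned to each layer
    Returns a boolean (H, W) foreground mask.
    """
    # Marginalize the stereo likelihood over each layer's disparity range
    stereo_fg = match_lik[:, :, fg_range].sum(axis=2)
    stereo_bg = match_lik[:, :, bg_range].sum(axis=2)
    # Fuse with color by treating the two likelihoods as independent
    post_fg = stereo_fg * color_fg
    post_bg = stereo_bg * color_bg
    # The paper couples these per-pixel terms with contrast-sensitive
    # smoothing and solves by graph cut; here we decide per pixel.
    return post_fg > post_bg
```

In this toy form, a pixel whose stereo evidence concentrates on the near (foreground) disparities is labeled foreground unless the color model strongly disagrees; the graph-cut version additionally enforces spatial coherence along low-contrast boundaries.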

V. Kolmogorov, A. Criminisi, A. Blake, G. Cross and C. Rother, "Probabilistic fusion of stereo with color and contrast for bilayer segmentation," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 28, no. 9, pp. 1480-1492, Sept. 2006.