Issue No. 7, July 2008 (vol. 30), pp. 1171-1185
ABSTRACT
We present a novel method for motion segmentation and depth ordering from a video sequence in general motion. We first compute motion segmentation based on differential properties of the spatio-temporal domain and scale-space integration. Given a motion boundary, we describe two algorithms to determine depth ordering from two- and three-frame sequences. A remarkable characteristic of our method is its ability to compute depth ordering from only two frames. The segmentation and depth ordering algorithms are shown to give good results on six real sequences taken in general motion. We use synthetic data to demonstrate robustness to high levels of noise and illumination changes; we also include cases where no intensity edge exists at the location of the motion boundary, or where no parametric motion model can describe the data. Finally, we describe human experiments showing that people, like our algorithm, can compute depth ordering from only two frames, even when the boundary between the layers is not visible in a single frame.
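The abstract builds on the classical kinetic-occlusion cue (see [11], [20] in the references): across a motion boundary, texture elements that are deleted between frames belong to the farther, occluded surface. The sketch below is not the paper's occlusion detector; it is a minimal toy illustration of that underlying principle on 1-D "frames" of intensity values, with a hypothetical `occluded_side` function and hand-picked boundary indices.

```python
def occluded_side(frame1, frame2, b1, b2):
    """Toy accretion/deletion cue for depth ordering at a motion boundary.

    frame1, frame2 : lists of (distinct) intensity values from two frames.
    b1, b2         : index of the motion boundary in each frame.

    The side whose texture values vanish between the frames is being
    occluded, i.e. it is the background layer.
    """
    # Values visible on each side of the boundary in frame 1 but gone in frame 2.
    lost_left = set(frame1[:b1]) - set(frame2[:b2])
    lost_right = set(frame1[b1:]) - set(frame2[b2:])
    if len(lost_right) > len(lost_left):
        return "right"      # right-side texture is deleted -> right is background
    if len(lost_left) > len(lost_right):
        return "left"       # left-side texture is deleted -> left is background
    return "undetermined"   # no asymmetric deletion observed


# A uniform foreground (value 1) advances rightward over textured background
# (values 20-25); background texture near the boundary disappears.
print(occluded_side([1, 1, 1, 20, 21, 22, 23, 24, 25],
                    [1, 1, 1, 1, 1, 22, 23, 24, 25], 3, 5))
```

This toy cue needs distinct, trackable texture values and a known boundary position; the paper's contribution is precisely to detect such boundaries and resolve the ordering from the spatio-temporal intensity data itself, without these hand-supplied inputs.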
INDEX TERMS
Image Processing and Computer Vision, Video analysis, Motion, Depth cues, Segmentation
CITATION
Doron Feldman and Daphna Weinshall, "Motion Segmentation and Depth Ordering Using an Occlusion Detector," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 30, no. 7, pp. 1171-1185, July 2008, doi:10.1109/TPAMI.2007.70766
REFERENCES
[1] N. Apostoloff and A. Fitzgibbon, “Learning Spatiotemporal T-Junctions for Occlusion Detection,” Proc. Computer Vision and Pattern Recognition, pp. 553-559, 2005.
[2] L. Bergen and F. Meyer, “A Novel Approach to Depth Ordering in Monocular Image Sequences,” Proc. Computer Vision and Pattern Recognition, vol. II, pp. 536-541, 2000.
[3] M.J. Black and D.J. Fleet, “Probabilistic Detection and Tracking of Motion Boundaries,” Int'l J. Computer Vision, vol. 38, no. 3, pp. 231-245, 2000.
[4] G.T. Chou, “A Model of Figure-Ground Segregation from Kinetic Occlusion,” Proc. Int'l Conf. Computer Vision, pp. 1050-1057, 1995.
[5] A.W. Cunningham, T.F. Shipley, and P.J. Kellman, “Interactions between Spatial and Spatiotemporal Information in Spatiotemporal Boundary Formation,” Perception and Psychophysics, vol. 60, no. 5, pp. 839-851, 1998.
[6] T. Darrell and D. Fleet, “Second-Order Method for Occlusion Relationships in Motion Layers,” Technical Report 314, MIT Media Lab, 1995.
[7] D. Feldman and D. Weinshall, “Motion Segmentation Using an Occlusion Detector,” Proc. Workshop Dynamical Vision, European Conf. Computer Vision, May 2006.
[8] E. Gamble and T. Poggio, “Visual Integration and Detection of Discontinuities: The Key Role of Intensity Edges,” A.I. Memo 970, AI Lab, MIT, 1987.
[9] C. Harris and M. Stephens, “A Combined Corner and Edge Detector,” Proc. Alvey Vision Conf., pp. 147-151, 1988.
[10] B. Horn and B. Schunck, “Determining Optical Flow,” Artificial Intelligence, vol. 17, pp. 185-203, 1981.
[11] G.A. Kaplan, “Kinetic Disruption of Optical Texture: The Perception of Depth at an Edge,” Perception and Psychophysics, vol. 6, no. 4, pp. 193-198, 1969.
[12] Q. Ke and T. Kanade, “A Subspace Approach to Layer Extraction,” Proc. Computer Vision and Pattern Recognition, pp. 255-262, 2001.
[13] V. Kolmogorov and R. Zabih, “Computing Visual Correspondence with Occlusions Using Graph Cuts,” Proc. Int'l Conf. Computer Vision, vol. 2, pp. 508-515, 2001.
[14] I. Laptev and T. Lindeberg, “Space-Time Interest Points,” Proc. Int'l Conf. Computer Vision, pp. 432-439, 2003.
[15] I. Laptev and T. Lindeberg, “Velocity Adaption of Space-Time Interest Points,” Proc. Int'l Conf. Pattern Recognition, vol. 1, pp. 52-56, 2004.
[16] T. Lindeberg, “Edge Detection and Ridge Detection with Automatic Scale Selection,” Int'l J. Computer Vision, vol. 30, no. 2, pp. 117-154, 1998.
[17] B.D. Lucas and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision,” Proc. Int'l Joint Conf. Artificial Intelligence, pp. 674-679, 1981.
[18] M. Middendorf and H.-H. Nagel, “Estimation and Interpretation of Discontinuities in Optical Flow Fields,” Proc. Int'l Conf. Computer Vision, vol. 1, pp. 178-183, 2001.
[19] D.W. Murray and B.F. Buxton, “Scene Segmentation from Visual Motion Using Global Optimization,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 9, no. 2, pp. 220-228, Mar. 1987.
[20] K.M. Mutch and W.B. Thompson, “Analysis of Accretion and Deletion at Boundaries in Dynamic Scenes,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 7, no. 2, pp. 133-138, 1985.
[21] M. Nicolescu and G. Medioni, “A Voting-Based Computational Framework for Visual Motion Analysis and Interpretation,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 739-752, May 2005.
[22] S.A. Niyogi, “Detecting Kinetic Occlusion,” Proc. Int'l Conf. Computer Vision, pp. 1044-1049, 1995.
[23] J.M. Odobez and P. Bouthemy, “MRF-Based Motion Segmentation Exploiting a 2D Motion Model Robust Estimation,” Proc. IEEE Int'l Conf. Image Processing, vol. 3, pp. 628-631, 1995.
[24] A.S. Ogale, C. Fermüller, and Y. Aloimonos, “Motion Segmentation Using Occlusions,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 6, pp. 988-992, June 2005.
[25] H. Pao, D. Geiger, and N. Rubin, “Measuring Convexity for Figure/Ground Separation,” Proc. Int'l Conf. Computer Vision, pp. 948-955, 1999.
[26] X. Ren, C.C. Fowlkes, and J. Malik, “Figure/Ground Assignment in Natural Images,” Proc. European Conf. Computer Vision, vol. II, pp. 614-627, 2006.
[27] E. Saund, “Perceptual Organization of Occluding Contours of Opaque Surfaces,” Computer Vision and Image Understanding, vol. 76, no. 1, pp. 70-82, Oct. 1999.
[28] H.S. Sawhney and S. Ayer, “Compact Representations of Videos through Dominant and Multiple Motion Estimation,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 8, pp. 814-830, Aug. 1996.
[29] J. Shi and J. Malik, “Motion Segmentation and Tracking Using Normalized Cuts,” Proc. Int'l Conf. Computer Vision, pp. 1154-1160, 1998.
[30] M. Tappen and W.T. Freeman, “Comparison of Graph Cuts with Belief Propagation for Stereo, Using Identical MRF Parameters,” Proc. Int'l Conf. Computer Vision, pp. 900-907, 2003.
[31] W.B. Thompson, K.M. Mutch, and V.A. Berzins, “Dynamic Occlusion Analysis in Optical Flow Fields,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 7, no. 4, pp. 374-383, 1985.
[32] D. Tweed and A. Calway, “Integrated Segmentation and Depth Ordering of Motion Layers in Image Sequences,” Proc. British Machine Vision Conf., 2000.
[33] Y. Weiss, “Smoothness in Layers: Motion Segmentation Estimation Using Nonparametric Mixture,” Proc. Computer Vision and Pattern Recognition, pp. 520-526, 1997.
[34] Y. Weiss and E.H. Adelson, “A Unified Mixture Framework for Motion Segmentation: Incorporating Spatial Coherence and Estimating the Number of Models,” Proc. Computer Vision and Pattern Recognition, pp. 321-326, 1996.
[35] J. Xiao and M. Shah, “Accurate Motion Layer Segmentation and Matting,” Proc. Computer Vision and Pattern Recognition, pp. 698-703, 2005.
[36] A. Yonas, L.G. Craton, and W.B. Thompson, “Relative Motion: Kinetic Information for the Order of Depth at an Edge,” Perception and Psychophysics, vol. 41, no. 1, pp. 53-59, 1987.