Issue No. 7, July 2013 (vol. 19)
pp. 1218-1227
We present a video editing technique based on changing the timelines of individual objects in video, which leaves them in their original places but puts them at different times. This allows the production of object-level slow motion effects, fast motion effects, or even time reversal. This is more flexible than simply applying such effects to whole frames, as new relationships between objects can be created. As we restrict object interactions to the same spatial locations as in the original video, our approach can produce high-quality results using only coarse matting of video objects. Coarse matting can be done efficiently using automatic video object segmentation, avoiding tedious manual matting. To design the output, the user interactively indicates the desired new life spans of objects, and may also change the overall running time of the video. Our method rearranges the timelines of objects in the video whilst applying appropriate object interaction constraints. We demonstrate that, while this editing technique is somewhat restrictive, it still allows many interesting results.
Keywords: Trajectory, Visualization, Optimization, Motion control, Time reversal, Object-level motion editing, Foreground/background reconstruction, Slow motion, Fast motion
"Timeline Editing of Objects in Video", IEEE Transactions on Visualization & Computer Graphics, vol.19, no. 7, pp. 1218-1227, July 2013, doi:10.1109/TVCG.2012.145
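The core idea described in the abstract, remapping each object's timeline while keeping it at its original spatial locations, can be illustrated with a minimal sketch. The code below is a hypothetical simplification, not the authors' implementation: it assumes a static camera, approximates background reconstruction with a plain temporal median, uses a linear time remapping for the object's new life span, and ignores the inter-object interaction constraints the paper enforces.

```python
import numpy as np

def object_timeline_remap(frames, masks, new_start, new_end, reverse=False):
    # frames: (T, H, W, 3) uint8 video; masks: (T, H, W) bool coarse mattes.
    # Hypothetical helper illustrating per-object retiming: the object keeps
    # its original spatial locations, but its appearance is replayed over the
    # output interval [new_start, new_end), optionally in reverse.
    T = frames.shape[0]
    # Coarse background reconstruction (assumption: static camera, object
    # small enough that a per-pixel temporal median removes it).
    background = np.median(frames, axis=0).astype(frames.dtype)
    out = np.repeat(background[None], T, axis=0)
    span = max(new_end - new_start, 1)
    for t in range(new_start, min(new_end, T)):
        # Linear time remapping: output time t -> normalized u -> source s.
        u = (t - new_start) / span
        if reverse:
            u = 1.0 - u
        s = min(int(u * (T - 1)), T - 1)
        m = masks[s]
        out[t][m] = frames[s][m]  # paste object at its original position
    return out
```

Because the object only ever appears where it appeared in the source footage, a coarse matte suffices, which is exactly the property the abstract exploits; stretching or shrinking `[new_start, new_end)` relative to the object's original life span yields slow motion, fast motion, or (with `reverse=True`) time reversal for that object alone.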
[1] G.R. Bradski, “Computer Vision Face Tracking for Use in a Perceptual User Interface,” Intel Technology J., vol. 2, pp. 12-21, 1998.
[2] D.B. Goldman, C. Gonterman, B. Curless, D. Salesin, and S.M. Seitz, “Video Object Annotation, Navigation, and Composition,” Proc. 21st Ann. ACM Symp. User Interface Software and Technology, pp. 3-12, Oct. 2008.
[3] X.L.K. Wei and J.X. Chai, “Interactive Tracking of 2D Generic Objects with Spacetime Optimization,” Proc. 10th European Conf. Computer Vision: Part I, pp. 657-670, 2008.
[4] A. Schodl and I.A. Essa, “Controlled Animation of Video Sprites,” Proc. ACM SIGGRAPH/Eurographics Symp. Computer Animation, pp. 121-127, 2002.
[5] S. Yeung, C. Tang, M. Brown, and S. Kang, “Matting and Compositing of Transparent and Refractive Objects,” ACM Trans. Graphics, vol. 30, no. 1, p. 2, 2011.
[6] K. He, C. Rhemann, C. Rother, X. Tang, and J. Sun, “A Global Sampling Method for Alpha Matting,” Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 2049-2056, 2011.
[7] Y. Zhang and R. Tong, “Environment-Sensitive Cloning in Images,” The Visual Computer, pp. 1-10, 2011.
[8] Z. Tang, Z. Miao, Y. Wan, and D. Zhang, “Video Matting via Opacity Propagation,” The Visual Computer, pp. 1-15, 2011.
[9] M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active Contour Models,” Int'l J. Computer Vision, vol. 1, pp. 321-331, 1988.
[10] Y.-Y. Chuang, A. Agarwala, B. Curless, D.H. Salesin, and R. Szeliski, “Video Matting of Complex Scenes,” ACM Trans. Graphics, vol. 21, pp. 243-248, July 2002.
[11] Y. Li, J. Sun, and H.-Y. Shum, “Video Object Cut and Paste,” ACM Trans. Graphics, vol. 24, pp. 595-600, July 2005.
[12] J. Wang, P. Bhat, R.A. Colburn, M. Agrawala, and M.F. Cohen, “Interactive Video Cutout,” ACM Trans. Graphics, vol. 24, pp. 585-594, July 2005.
[13] X. Bai, J. Wang, D. Simons, and G. Sapiro, “Video Snapcut: Robust Video Object Cutout Using Localized Classifiers,” ACM Trans. Graphics, vol. 28, pp. 70:1-70:11, July 2009.
[14] T. Kwon, K.H. Lee, J. Lee, and S. Takahashi, “Group Motion Editing,” ACM Trans. Graphics, vol. 27, no. 3, pp. 80:1-80:8, Aug. 2008.
[15] Y. Li, T. Zhang, and D. Tretter, “An Overview of Video Abstraction Techniques,” Technical Report HP-2001-191, HP Laboratory, 2001.
[16] B.T. Truong and S. Venkatesh, “Video Abstraction: A Systematic Review and Classification,” ACM Trans. Multimedia Computing, Comm., and Applications, vol. 3, pp. 1-37, 2007.
[17] C.W. Ngo, Y.F. Ma, and H.J. Zhang, “Automatic Video Summarization by Graph Modeling,” Proc. IEEE Ninth Int'l Conf. Computer Vision, pp. 104-109, 2003.
[18] H.W. Kang, X.Q. Chen, Y. Matsushita, and X. Tang, “Space-Time Video Montage,” Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 1331-1338, 2006.
[19] C. Barnes, D.B. Goldman, E. Shechtman, and A. Finkelstein, “Video Tapestries with Continuous Temporal Zoom,” ACM Trans. Graphics, vol. 29, pp. 89:1-89:9, July 2010.
[20] Z. Li, P. Ishwar, and J. Konrad, “Video Condensation by Ribbon Carving,” IEEE Trans. Image Processing, vol. 18, no. 11, pp. 2572-2583, Nov. 2009.
[21] K. Slot, R. Truelsen, and J. Sporring, “Content-Aware Video Editing in the Temporal Domain,” Proc. 16th Scandinavian Conf. Image Analysis, pp. 490-499, 2009.
[22] B. Chen and P. Sen, “Video Carving,” Proc. Eurographics '08, 2008.
[23] S. Pongnumkul, J. Wang, G. Ramos, and M.F. Cohen, “Content-Aware Dynamic Timeline for Video Browsing,” Proc. 23rd Ann. ACM Symp. User Interface Software and Technology, pp. 139-142, 2010.
[24] T. Karrer, M. Weiss, E. Lee, and J. Borchers, “Dragon: A Direct Manipulation Interface for Frame-Accurate in-Scene Video Navigation,” Proc. 26th Ann. SIGCHI Conf. Human Factors in Computing Systems, pp. 247-250, Apr. 2008.
[25] C. Liu, A. Torralba, W.T. Freeman, F. Durand, and E.H. Adelson, “Motion Magnification,” ACM Trans. Graphics, vol. 24, no. 3, pp. 519-526, July 2005.
[26] J. Chen, S. Paris, J. Wang, W. Matusik, M. Cohen, and F. Durand, “The Video Mesh: A Data Structure for Image-Based Video Editing,” Proc. IEEE Int'l Conf. Computational Photography, pp. 1-8, 2011.
[27] V. Scholz, S. El-Abed, H.-P. Seidel, and M.A. Magnor, “Editing Object Behaviour in Video Sequences,” Computer Graphics Forum, vol. 28, no. 6, pp. 1632-1643, 2009.
[28] A. Rav-Acha, Y. Pritch, D. Lischinski, and S. Peleg, “Evolving Time Fronts: Spatio-Temporal Video Warping,” Technical Report HUJI-CSE-LTR-2005-10, Hebrew Univ., Apr. 2005.
[29] A. Rav-Acha, Y. Pritch, D. Lischinski, and S. Peleg, “Dynamosaicing: Mosaicing of Dynamic Scenes,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 10, pp. 1789-1801, Oct. 2007.
[30] C.D. Correa and K.-L. Ma, “Dynamic Video Narratives,” ACM Trans. Graphics, vol. 29, pp. 88:1-88:9, July 2010.
[31] E.P. Bennett and L. McMillan, “Computational Time-Lapse Video,” ACM Trans. Graphics, vol. 26, pp. 102-108, July 2007.
[32] D.B. Goldman, B. Curless, D. Salesin, and S.M. Seitz, “Schematic Storyboarding for Video Visualization and Editing,” ACM Trans. Graphics, vol. 25, pp. 862-871, July 2006.
[33] Y. Pritch, A. Rav-Acha, and S. Peleg, “Nonchronological Video Synopsis and Indexing,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 30, no. 11, pp. 1971-1984, Nov. 2008.
[34] M. Brown and D.G. Lowe, “Recognising Panoramas,” Proc. IEEE Ninth Int'l Conf. Computer Vision, pp. 1218-1227, 2003.
[35] M. Grant and S. Boyd, “CVX: Matlab Software for Disciplined Convex Programming, Version 1.21,” http://cvxr.com/cvx, Dec. 2010.
[36] Y. Weng, W. Xu, S. Hu, J. Zhang, and B. Guo, “Keyframe Based Video Object Deformation,” Proc. Int'l Conf. Cyberworlds, pp. 142-149, 2008.
[37] K. Peker, A. Divakaran, and H. Sun, “Constant Pace Skimming and Temporal Sub-Sampling of Video Using Motion Activity,” Proc. IEEE Int'l Conf. Image Processing, pp. 414-417, 2001.
[38] F. Liu, M. Gleicher, J. Wang, H. Jin, and A. Agarwala, “Subspace Video Stabilization,” ACM Trans. Graphics, vol. 30, no. 1, article 4, 2011.
[39] Z. Farbman and D. Lischinski, “Tonal Stabilization of Video,” ACM Trans. Graphics, vol. 30, no. 4, pp. 89:1-89:9, 2011.
[40] A. Finkelstein, C.E. Jacobs, and D.H. Salesin, “Multiresolution Video,” Proc. ACM SIGGRAPH '96, pp. 281-290, 1996.