Issue No. 11 - November 2008 (vol. 30)
pp. 1971-1984
Yael Pritch , The Hebrew University of Jerusalem, Jerusalem
Alex Rav-Acha , Weizmann Institute of Science, Israel
Shmuel Peleg , The Hebrew University of Jerusalem, Jerusalem
The amount of captured video is growing with the increasing number of video cameras, especially the millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval are time consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing such video. It provides a short video representation while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by showing multiple activities simultaneously, even when they originally occurred at different times. The synopsis video also serves as an index into the original video by pointing to the original time of each activity. Video synopsis can be applied to create a synopsis of endless video streams, such as those generated by webcams and surveillance cameras. It can address queries like "Show in one minute the synopsis of this camera broadcast during the past day." This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames) and (ii) a response phase, generating the video synopsis as a response to the user's query.
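The core idea of the two-phase process above can be illustrated with a minimal sketch: activities extracted from the stream are stored with their original time intervals, and the response phase shifts each one to an earlier synopsis time so that several play at once, while a mapping back to the original times provides the index. This is only a toy illustration under simplifying assumptions (the paper works with segmented space-time object "tubes" and energy minimization, not bare intervals; the names `Activity`, `make_synopsis`, and `max_overlap` are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Activity:
    # Hypothetical stand-in for a stored object/activity; the actual
    # system keeps full space-time tubes, not just frame intervals.
    label: str
    start: int   # original start frame
    end: int     # original end frame (exclusive)

def make_synopsis(activities, max_overlap=3):
    """Greedily shift each activity to the earliest synopsis time at
    which no more than `max_overlap` activities play simultaneously.
    Returns (synopsis_length, mapping), where mapping[label] gives
    (synopsis_start, original_start) -- the index into the source video."""
    timeline = {}   # synopsis frame -> number of concurrent activities
    mapping = {}
    for act in sorted(activities, key=lambda a: a.start):
        dur = act.end - act.start
        t = 0
        # Find the earliest placement that keeps concurrency bounded.
        while any(timeline.get(t + i, 0) >= max_overlap for i in range(dur)):
            t += 1
        for i in range(dur):
            timeline[t + i] = timeline.get(t + i, 0) + 1
        mapping[act.label] = (t, act.start)
    length = max(timeline) + 1 if timeline else 0
    return length, mapping
```

For example, three 10-frame activities originally occurring minutes apart all map to synopsis time 0 and play simultaneously, condensing the stream to 10 frames while the mapping still points each one back to its original time.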
Image/video retrieval, Video, Computer vision, Motion, Video analysis, Tracking
Yael Pritch, Alex Rav-Acha, Shmuel Peleg, "Nonchronological Video Synopsis and Indexing", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.30, no. 11, pp. 1971-1984, November 2008, doi:10.1109/TPAMI.2008.29