Refilming with Depth-Inferred Videos
IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 5, September/October 2009
Zilong Dong , Zhejiang University, Hangzhou
Jiaya Jia , The Chinese University of Hong Kong, Hong Kong
Liang Wan , The Chinese University of Hong Kong, Hong Kong
Tien-Tsin Wong , The Chinese University of Hong Kong, Hong Kong
Guofeng Zhang , Zhejiang University, Hangzhou
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.47
Compared with still image editing, content-based video editing faces the additional challenge of maintaining spatiotemporal consistency with respect to scene geometry. This makes it difficult to seamlessly modify video content, for instance, to insert or remove an object. In this paper, we present a new video editing system for creating spatiotemporally consistent and visually appealing refilming effects. Unlike typical filming practice, our system requires no labor-intensive construction of 3D models or surfaces mimicking the real scene. Instead, it is based on unsupervised inference of view-dependent depth maps for all video frames. We provide interactive tools that require only a small amount of user input to perform elementary video content editing, such as separating video layers, completing the background scene, and extracting moving objects. These tools can be combined to produce a variety of visual effects, including but not limited to video composition, the "predator" effect, bullet-time, depth-of-field, and fog synthesis. Some of these effects can be achieved in real time.
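As a rough illustration of how a per-pixel depth map enables an effect such as fog synthesis, the sketch below blends each frame with a constant fog color using the standard exponential attenuation model, so that distant pixels receive more fog. This is a minimal assumption-laden sketch, not the paper's actual implementation; the function name and parameters are illustrative.

```python
import numpy as np

def add_fog(frame, depth, fog_color=(0.8, 0.8, 0.85), beta=0.5):
    """Blend a frame with a constant fog color using per-pixel depth.

    frame: HxWx3 float array in [0, 1]
    depth: HxW float array of per-pixel scene depth (illustrative units)
    beta:  assumed fog density; larger values thicken the fog
    """
    # Exponential attenuation: transmittance falls off with distance,
    # so far pixels are pulled toward the fog color.
    t = np.exp(-beta * depth)[..., None]          # HxWx1 transmittance
    fog = np.asarray(fog_color, dtype=frame.dtype)
    return t * frame + (1.0 - t) * fog
```

Because this is a simple per-pixel blend, it can run in real time once the depth maps are available, consistent with the abstract's note that some effects are achieved in real time.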
Keywords: Video editing, refilming, depth estimation, composition, background completion, layer separation.
Zilong Dong, Jiaya Jia, Liang Wan, Tien-Tsin Wong, Guofeng Zhang, "Refilming with Depth-Inferred Videos", IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 5, pp. 828-840, September/October 2009, doi:10.1109/TVCG.2009.47