Issue No. 05 - September/October 2009 (vol. 15)
ISSN: 1077-2626
pp: 828-840
Guofeng Zhang , Zhejiang University, Hangzhou
Zilong Dong , Zhejiang University, Hangzhou
Jiaya Jia , The Chinese University of Hong Kong, Hong Kong
Liang Wan , The Chinese University of Hong Kong, Hong Kong
Tien-Tsin Wong , The Chinese University of Hong Kong, Hong Kong
Hujun Bao , Zhejiang University, Hangzhou
Compared to still image editing, content-based video editing faces the additional challenge of maintaining spatiotemporal consistency with respect to geometry. This makes it difficult to seamlessly modify video content, for instance, to insert or remove an object. In this paper, we present a new video editing system for creating spatiotemporally consistent and visually appealing refilming effects. Unlike typical filming practice, our system requires no labor-intensive construction of 3D models/surfaces mimicking the real scene. Instead, it is based on an unsupervised inference of view-dependent depth maps for all video frames. We provide interactive tools requiring only a small amount of user input to perform elementary video content editing, such as separating video layers, completing the background scene, and extracting moving objects. These tools can be utilized to produce a variety of visual effects in our system, including but not limited to video composition, "predator" effect, bullet-time, depth-of-field, and fog synthesis. Some of the effects can be achieved in real time.
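To illustrate how a per-frame depth map can drive one of the listed effects, the sketch below synthesizes a depth-of-field look by blending each pixel between the sharp frame and blurred copies, with blur strength proportional to the pixel's distance from a chosen focal depth. This is a minimal, generic illustration, not the paper's method; the function names, the box-blur approximation of defocus, and the linear blur-level mapping are all assumptions made for this example.

```python
import numpy as np

def box_blur(img, r):
    """Separable box blur of radius r (window 2r+1) via cumulative sums.
    A crude stand-in for a defocus kernel, used here for simplicity."""
    k = 2 * r + 1
    out = img
    for axis in (0, 1):
        pad = [(0, 0)] * img.ndim
        pad[axis] = (r + 1, r)  # edge padding so borders stay well-defined
        c = np.pad(out, pad, mode="edge").cumsum(axis)
        n = c.shape[axis]
        out = (np.take(c, range(k, n), axis=axis)
               - np.take(c, range(0, n - k), axis=axis)) / k
    return out

def synthetic_depth_of_field(image, depth, focal_depth, max_radius=3):
    """Depth-driven defocus sketch: pixels near focal_depth stay sharp,
    pixels far from it are blended toward progressively blurred copies.
    (Illustrative only; not the system described in the paper.)"""
    img = image.astype(np.float64)
    # Precompute blurred copies at radii 0..max_radius.
    stack = np.stack([img] + [box_blur(img, r)
                              for r in range(1, max_radius + 1)])
    # Per-pixel blur level, scaled linearly by distance from the focal plane.
    span = np.abs(depth - focal_depth)
    level = np.clip(span / max(span.max(), 1e-9) * max_radius, 0, max_radius)
    lo = np.floor(level).astype(int)
    hi = np.minimum(lo + 1, max_radius)
    frac = (level - lo)[..., None]
    # Linearly interpolate between the two nearest blur levels per pixel.
    h, w = depth.shape
    yy, xx = np.mgrid[0:h, 0:w]
    return (1 - frac) * stack[lo, yy, xx] + frac * stack[hi, yy, xx]
```

A pixel lying exactly on the focal plane selects blur level 0 and is returned unchanged, while the farthest pixels receive the maximum blur radius; in the paper's setting the depth map would come from the unsupervised per-frame depth inference rather than being supplied by hand.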
Video editing, refilming, depth estimation, composition, background completion, layer separation.

T.-T. Wong, H. Bao, J. Jia, Z. Dong, L. Wan and G. Zhang, "Refilming with Depth-Inferred Videos," in IEEE Transactions on Visualization & Computer Graphics, vol. 15, no. 5, pp. 828-840, 2009.