<p><b>Abstract</b>—An explosion of on-line image and video data in digital form is already well underway. With the exponential rise in interactive information exploration and dissemination through the World-Wide Web (WWW), the major inhibitors of rapid access to on-line video data are the cost and management of capture and storage, the lack of real-time delivery, and the absence of content-based intelligent search and indexing techniques. Solutions for capture, storage, and delivery may be on the horizon or a little beyond. However, even with rapid delivery, the lack of efficient authoring and querying tools for visual content-based indexing may still prevent video information from being used as widely as text and traditional tabular data are today.</p><p>To browse and index into videos nonlinearly through visual content, it is necessary to develop authoring tools that can automatically separate moving objects and significant components of the scene, and represent these in a compact form. Given that video data arrives in torrents (almost a megabyte every 30th of a second), it would be highly inefficient to search for objects and scenes in every frame of a video. In this paper, we present techniques to automatically derive compact representations of scenes and objects from motion information.</p><p>Image motion is a significant cue in videos for separating scenes into their significant components and for separating out moving objects. Motion analysis is useful in capturing the visual content of videos for indexing and browsing in two different ways. First, separation of the static scene from moving objects can be accomplished by employing <it>dominant</it> 2D/3D motion estimation methods. Alternatively, if the goal is to represent the fixed scene too as a composition of significant structures and objects, then <it>simultaneous</it> multiple motion methods may be more appropriate. In either case, view-based summarized representations of the scene can be created by video compositing/mosaicing based on the estimated motions. We present robust algorithms for both kinds of representations: 1) dominant-motion-estimation-based techniques, which exploit the fairly common situation in video where a mostly fixed background (scene) is imaged with or without independently moving objects, and 2) simultaneous multiple motion estimation and representation of motion video using <it>layered</it> representations. Ample examples of the representations achieved by each method are included in the paper.</p>
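The dominant-motion idea described above can be illustrated with a minimal sketch: fit a single parametric (here, 2D affine) motion model to per-pixel flow vectors using a robust, iteratively reweighted least-squares loop, so that pixels belonging to independently moving objects are down-weighted as outliers. This is an assumption-laden toy, not the paper's algorithm: the function name `fit_affine_irls`, the synthetic flow input, and the Geman-McClure-style reweighting are all illustrative choices.

```python
import numpy as np

def fit_affine_irls(pts, flow, iters=20, sigma=1.0):
    """Robustly fit a 2D affine motion u(x) = A x + t to flow vectors
    via iteratively reweighted least squares (IRLS). Points moving with
    the dominant motion end up with weight near 1; outliers (e.g.,
    independently moving objects) are down-weighted."""
    n = len(pts)
    # Design matrix for the 6-parameter affine model:
    # u = a1*x + a2*y + a3,  v = a4*x + a5*y + a6
    X = np.zeros((2 * n, 6))
    X[0::2, 0] = pts[:, 0]; X[0::2, 1] = pts[:, 1]; X[0::2, 2] = 1.0
    X[1::2, 3] = pts[:, 0]; X[1::2, 4] = pts[:, 1]; X[1::2, 5] = 1.0
    b = flow.reshape(-1)
    w = np.ones(2 * n)
    wp = np.ones(n)
    for _ in range(iters):
        # Weighted least-squares solve (sqrt-weights for sum w*r^2)
        sw = np.sqrt(w)
        theta, *_ = np.linalg.lstsq(X * sw[:, None], sw * b, rcond=None)
        r = b - X @ theta
        # Per-point squared residual; Geman-McClure-style reweighting
        r2 = r[0::2] ** 2 + r[1::2] ** 2
        wp = sigma ** 2 / (sigma ** 2 + r2)
        w = np.repeat(wp, 2)
    return theta, wp  # model parameters and final per-point weights
```

On synthetic data with 20% outlier flow vectors, the recovered affine parameters stay close to the dominant motion while the outliers' weights collapse toward zero, which is the behavior the abstract relies on when mosaicing the background.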
<b>Index Terms</b>—Compact video representations, video motion analysis, video mosaics, video indexing, layered motion representations, motion segmentation, robust estimation, mixture models, expectation-maximization (EM) algorithm.
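The layered-representation approach named in the index terms models each pixel's motion as drawn from a mixture, with EM alternating between soft layer assignment and motion re-estimation. The following is a deliberately simplified sketch under strong assumptions (two translational layers, scalar flow values, a hypothetical `em_two_motion` function), not the estimator from the paper.

```python
import numpy as np

def em_two_motion(u, iters=50):
    """EM for a two-component Gaussian mixture over per-pixel flow
    values: each pixel is softly assigned to one of two translational
    motion layers, and each layer's motion is re-estimated in turn."""
    mu = np.array([u.min(), u.max()], dtype=float)  # layer motions
    var = np.array([1.0, 1.0])                      # layer variances
    pi = np.array([0.5, 0.5])                       # layer priors
    for _ in range(iters):
        # E-step: responsibilities (probability of layer ownership)
        lik = pi / np.sqrt(2 * np.pi * var) \
            * np.exp(-(u[:, None] - mu) ** 2 / (2 * var))
        resp = lik / (lik.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: re-estimate layer motions, variances, and priors
        nk = resp.sum(axis=0)
        mu = (resp * u[:, None]).sum(axis=0) / nk
        var = (resp * (u[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(u)
    return mu, resp
```

With flow values clustered around two motions, the recovered means land on the two layer velocities and the responsibilities give the soft motion segmentation that a layered representation is built from.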

H. S. Sawhney and S. Ayer, "Compact Representations of Videos Through Dominant and Multiple Motion Estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, pp. 814-830, 1996.