Issue No. 01 - January (2010 vol. 32)
ISSN: 0162-8828
pp: 171-177
Vijay Mahadevan , University of California, San Diego, La Jolla
Nuno Vasconcelos , University of California, San Diego, La Jolla
A spatiotemporal saliency algorithm based on a center-surround framework is proposed. The algorithm is inspired by biological mechanisms of motion-based perceptual grouping and extends a discriminant formulation of center-surround saliency previously proposed for static imagery. Under this formulation, the saliency of a location is equated to the power of a predefined set of features to discriminate between the visual stimuli in a center and a surround window, centered at that location. The features are spatiotemporal video patches and are modeled as dynamic textures, to achieve a principled joint characterization of the spatial and temporal components of saliency. The combination of discriminant center-surround saliency with the modeling power of dynamic textures yields a robust, versatile, and fully unsupervised spatiotemporal saliency algorithm, applicable to scenes with highly dynamic backgrounds and moving cameras. The related problem of background subtraction is treated as the complement of saliency detection, by classifying nonsalient (with respect to appearance and motion dynamics) points in the visual field as background. The algorithm is tested for background subtraction on challenging sequences, and shown to substantially outperform various state-of-the-art techniques. Quantitatively, its average error rate is almost half that of the closest competitor.
Spatiotemporal saliency, background subtraction, dynamic backgrounds, motion saliency, dynamic texture, discriminant center-surround architecture, video modeling.
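The discriminant center-surround principle described in the abstract can be illustrated with a minimal sketch. This is not the paper's method (which models spatiotemporal patches as dynamic textures); instead, it models the pixel intensities of a center window and its surround as one-dimensional Gaussians and scores saliency by their symmetrized KL divergence, i.e., by how discriminable the two windows are. All function names, window radii, and parameters below are illustrative assumptions.

```python
import numpy as np


def gaussian_kl(mu0, var0, mu1, var1):
    """KL divergence KL(N(mu0, var0) || N(mu1, var1)) for 1-D Gaussians."""
    return 0.5 * (var0 / var1 + (mu0 - mu1) ** 2 / var1 - 1.0 + np.log(var1 / var0))


def center_surround_saliency(frame, cy, cx, rc=4, rs=12, eps=1e-6):
    """Saliency of (cy, cx): discriminability of center vs. surround intensities.

    rc, rs: radii of the center and surround windows (assumed values).
    """
    # Center window and the larger window that contains it.
    center = frame[cy - rc:cy + rc + 1, cx - rc:cx + rc + 1]
    window = frame[cy - rs:cy + rs + 1, cx - rs:cx + rs + 1]

    # Surround = larger window with the center region masked out.
    mask = np.ones(window.shape, dtype=bool)
    off = rs - rc
    mask[off:off + center.shape[0], off:off + center.shape[1]] = False
    surround = window[mask]

    # Fit a Gaussian to each region (eps guards against zero variance).
    mu_c, var_c = center.mean(), center.var() + eps
    mu_s, var_s = surround.mean(), surround.var() + eps

    # Symmetrized KL divergence as a simple discriminability score.
    return gaussian_kl(mu_c, var_c, mu_s, var_s) + gaussian_kl(mu_s, var_s, mu_c, var_c)


# Usage: a bright square on a dark noisy background scores higher at its
# center than at a point in the uniform background.
frame = np.zeros((64, 64))
frame[28:37, 28:37] = 1.0
frame += 0.01 * np.random.default_rng(0).standard_normal(frame.shape)
print(center_surround_saliency(frame, 32, 32))  # object location: large score
print(center_surround_saliency(frame, 12, 12))  # background: small score
```

In the paper this Gaussian appearance model is replaced by dynamic-texture models fit to spatiotemporal video patches, so that the same discriminability score jointly captures spatial and temporal (motion) saliency.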

V. Mahadevan and N. Vasconcelos, "Spatiotemporal Saliency in Dynamic Scenes," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 171-177, Jan. 2010.