Issue No. 01 - January (2011 vol. 33)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TPAMI.2010.61
Avinash Ravichandran , The Johns Hopkins University, Baltimore
René Vidal , The Johns Hopkins University, Baltimore
We consider the problem of spatially and temporally registering multiple video sequences of dynamic scenes that contain, but are not limited to, nonrigid objects such as fireworks, flags fluttering in the wind, etc., taken from different vantage points. This problem is extremely challenging due to the complex variations in the appearance of such dynamic scenes. In this paper, we propose a simple algorithm for matching such complex scenes. Our algorithm does not require the cameras to be synchronized and is not based on frame-by-frame or volume-by-volume registration. Instead, we model each video as the output of a linear dynamical system and transform the task of registering the video sequences into that of registering the parameters of the corresponding dynamical models. As these parameters are not uniquely defined, one cannot directly compare them to perform registration. We resolve these ambiguities by jointly identifying the parameters from multiple video sequences and converting the identified parameters to a canonical form. This reduces the video registration problem to a multiple image registration problem, which can be efficiently solved using existing image matching techniques. We test our algorithm on a wide variety of challenging video sequences and show that it matches the performance of significantly more computationally expensive existing methods.
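The per-video modeling step referred to in the abstract can be illustrated with the standard SVD-based identification of a dynamic-texture model, y_t = C x_t, x_{t+1} = A x_t. The sketch below is a minimal, generic version of that identification; the paper's specific contributions (joint identification across sequences and conversion of the parameters to a canonical form) are not shown, and the function name and interface here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def identify_dynamic_texture(Y, n):
    """Fit a dynamic-texture model to a video (illustrative sketch).

    Y : (pixels, frames) matrix whose columns are vectorized frames.
    n : state-space dimension (number of appearance components kept).

    Returns (A, C, X): dynamics matrix, observation matrix, and the
    estimated state sequence of the mean-subtracted frames.
    """
    # Subtract the temporal mean image so the SVD captures variation.
    Ymean = Y.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(Y - Ymean, full_matrices=False)
    C = U[:, :n]                                # appearance basis (orthonormal columns)
    X = np.diag(S[:n]) @ Vt[:n, :]              # state trajectory
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])    # dynamics via least squares
    return A, C, X
```

Because (A, C) are only identified up to a change of basis of the state space, two videos of the same scene fit independently this way yield parameters that are not directly comparable; this is the ambiguity the paper resolves through joint identification and a canonical form.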
Dynamic textures, video registration, nonrigid dynamical scenes.
A. Ravichandran and R. Vidal, "Video Registration Using Dynamic Textures," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 33, no. 1, pp. 158-171, Jan. 2011.