Issue No. 3 - March 2010 (vol. 32)
pp. 517-529
Stan Sclaroff , Boston University, Boston
Walter Nunziati , Media Integration and Communication Center, Firenze
ABSTRACT
Identifying correspondences between trajectory segments observed from nonsynchronized cameras is important for reconstructing the complete trajectory of moving targets in a large scene. Such a reconstruction can be obtained from motion data by comparing the trajectory segments and estimating both the spatial and temporal alignments. Exhaustive testing of all possible trajectory correspondences over a temporal window is viable only in cases with a limited number of moving targets and large view overlaps. Therefore, alternative solutions are required for situations with several trajectories that are only partially visible in each view. In this paper, we propose a new method based on a view-invariant representation of trajectories, which is used to produce a sparse set of salient points for the trajectory segments observed in each view. Only the neighborhoods of these salient points in the view-invariant representation are then used to estimate the spatial and temporal alignment of trajectory pairs in different views. It is demonstrated that, for planar scenes, the method recovers both spatial and temporal alignments with good precision and efficiency, even given relatively small overlap between views and arbitrary (unknown) temporal shifts between the cameras. The method provides the same capabilities for trajectories that are only locally planar but exhibit some nonplanarity at a global level.
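The view invariance exploited here rests on a classical projective property listed in the index terms: the cross ratio of four collinear points is preserved by any projective transformation. The sketch below (not the paper's implementation; point values and the homography coefficients are arbitrary choices for illustration) verifies this numerically for a 1D projective map.

```python
# Minimal sketch, assuming points are given by their parameters along a line.
# The cross ratio is the projective invariant underlying view-invariant
# trajectory representations such as the one described in the abstract.

def cross_ratio(a, b, c, d):
    """Cross ratio of four collinear points given by line parameters."""
    return ((c - a) * (d - b)) / ((d - a) * (c - b))

def homography_1d(t, h=(2.0, 1.0, 0.5, 3.0)):
    """An arbitrary invertible 1D projective map t -> (a*t + b)/(c*t + d)."""
    a, b, c, d = h
    return (a * t + b) / (c * t + d)

pts = [1.0, 2.0, 4.0, 7.0]
before = cross_ratio(*pts)
after = cross_ratio(*(homography_1d(t) for t in pts))
print(abs(before - after) < 1e-9)  # prints True: the cross ratio is preserved
```

Because the value survives an arbitrary projective map, it can be compared directly across camera views without first recovering the inter-view homography, which is what makes a sparse, invariant description of trajectory neighborhoods possible.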
INDEX TERMS
Registration, invariants, similarity measures, cross ratio.
CITATION
Stan Sclaroff and Walter Nunziati, "Matching Trajectories between Video Sequences by Exploiting a Sparse Projective Invariant Representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 3, pp. 517-529, March 2010, doi:10.1109/TPAMI.2009.35