Issue No. 02, February 2008 (vol. 30), pp. 361-367
ABSTRACT
A camera mounted on an aerial vehicle provides an excellent means of monitoring large areas of a scene. Using several such cameras on different aerial vehicles offers further flexibility, both in increased visual scope and in the pursuit of multiple targets. In this paper, we address the problem of associating objects across multiple airborne cameras. Since the cameras are moving and often widely separated, direct appearance-based or proximity-based constraints cannot be used. Instead, we exploit geometric constraints on the relationship between the motion of each object across cameras to test multiple association hypotheses, without assuming any prior calibration information. Given our scene model, we propose a geometrically motivated likelihood function for evaluating a hypothesized association between observations in multiple cameras. Since multiple cameras exist, ensuring coherency in association is an essential requirement, e.g., maintaining transitive closure of associations across more than two cameras. To ensure such coherency, we pose the problem of maximizing the likelihood function as a k-dimensional matching and use an approximation to find the optimal association assignment. Using the proposed error function, canonical trajectories of each object and optimal estimates of the inter-camera transformations (in a maximum likelihood sense) are computed. Finally, we show that, as a result of associating objects across the cameras, a concurrent visualization of multiple aerial video streams is possible and that, under special conditions, trajectories interrupted due to occlusion or missing detections can be repaired. Results are shown on a number of real and controlled scenarios with multiple objects observed by multiple cameras, validating our qualitative models; quantitative performance is also reported through simulation.
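The core idea in the abstract — scoring each hypothesized cross-camera association by a geometric transfer error and then choosing the assignment that minimizes total cost — can be illustrated with a minimal toy sketch. The sketch below is an assumption-laden simplification, not the paper's method: it assumes the inter-camera map `H` is already known (the paper estimates it jointly, in a maximum likelihood sense), it considers only two cameras rather than the k-dimensional matching of the paper, and it brute-forces all bijections instead of using the paper's approximation algorithm.

```python
import itertools
import math

def transfer_error(traj_a, traj_b, H):
    """Mean point-wise distance between traj_a mapped through the
    (assumed known, for this toy demo) inter-camera map H and traj_b.
    Trajectories are lists of time-aligned (x, y) points."""
    total = 0.0
    for (x, y), (u, v) in zip(traj_a, traj_b):
        mx, my = H(x, y)
        total += math.hypot(mx - u, my - v)
    return total / len(traj_a)

def best_association(trajs_a, trajs_b, H):
    """Brute-force over all bijections between the two trajectory sets
    (fine for a handful of objects); returns the permutation p such that
    trajs_a[i] is associated with trajs_b[p[i]], plus its total cost."""
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(trajs_b))):
        cost = sum(transfer_error(trajs_a[i], trajs_b[j], H)
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

# Toy example: two objects seen by two cameras related by a pure
# translation (a stand-in for the planar inter-camera transformation).
H = lambda x, y: (x + 10.0, y + 5.0)
trajs_a = [[(0.0, 0.0), (1.0, 1.0)],
           [(5.0, 5.0), (6.0, 6.0)]]
# Camera B observes the same objects, listed in the opposite order.
trajs_b = [[(15.0, 10.0), (16.0, 11.0)],
           [(10.0, 5.0), (11.0, 6.0)]]
perm, cost = best_association(trajs_a, trajs_b, H)
print(perm, cost)  # recovers the swapped ordering: (1, 0) with cost 0.0
```

In the paper, this exhaustive search is replaced by an approximation to the (NP-hard for k > 2) k-dimensional matching problem, which also enforces transitive closure of the associations across more than two cameras.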
INDEX TERMS
Applications, Scene Analysis, Motion, Sensor fusion, Registration
CITATION
Yaser Ajmal Sheikh, Mubarak Shah, "Trajectory Association across Multiple Airborne Cameras", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.30, no. 2, pp. 361-367, February 2008, doi:10.1109/TPAMI.2007.70750