Stereo-Motion with Stereo and Motion in Complement
February 2000 (vol. 22 no. 2)
pp. 215-220

Abstract—This paper presents a new approach to combining stereo vision and dynamic vision that retains their advantages while removing their disadvantages. It is shown that, under the affine camera model, the stereo correspondences and motion correspondences, when organized in a particular way in a matrix, can be decomposed into the 3D structure of the scene, the camera parameters, the motion parameters, and the stereo geometry. With this, the approach can infer stereo correspondences from motion correspondences in time linear in the size of the available image data. It offers the simpler correspondence of dynamic vision and the accurate reconstruction of stereo vision, even with short image sequences.
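The decomposition the abstract describes builds on affine factorization in the style of Tomasi and Kanade [14]: tracked 2D points from multiple views (here, stereo frames over time) are stacked into a measurement matrix which, after centering, has rank at most 3 and factors into a motion part and a structure part. The sketch below illustrates only that underlying rank property with synthetic data, not the paper's specific stereo-motion algorithm; all names and dimensions are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of affine (Tomasi-Kanade style) factorization, the idea the
# paper builds on. F views of P points under affine cameras are stacked into
# a 2F x P measurement matrix W; after subtracting each row's mean, W has
# rank <= 3 and factors (via SVD) into motion and structure, each recovered
# up to an unknown affine transform.

rng = np.random.default_rng(0)
P = 20                               # number of scene points (assumed)
X = rng.standard_normal((3, P))      # ground-truth 3D structure

F = 6                                # number of views (assumed)
W = np.zeros((2 * F, P))
for f in range(F):
    A = rng.standard_normal((2, 3))  # affine camera rows for view f
    t = rng.standard_normal((2, 1))  # image-plane translation
    W[2*f:2*f+2] = A @ X + t         # observed 2D image coordinates

W0 = W - W.mean(axis=1, keepdims=True)   # centering removes translations
U, s, Vt = np.linalg.svd(W0, full_matrices=False)

rank = int(np.sum(s > 1e-8 * s[0]))  # noise-free: exactly 3
M_hat = U[:, :3] * s[:3]             # recovered motion (up to affine ambiguity)
S_hat = Vt[:3]                       # recovered structure (up to affine ambiguity)
residual = np.linalg.norm(W0 - M_hat @ S_hat)
```

Because the factorization is only defined up to a 3x3 affine transform, metric structure requires additional constraints; in the paper's setting, the stereo geometry supplies such constraints while the motion correspondences keep matching simple.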

[1] P. Balasubramanyam and M.A. Snyder, “The P-Field: A Computational Model for Binocular Motion Processing,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 115-120, Maui, Hawaii, June 1991.
[2] R. Chung and S.-K. Wong, “Stereo Calibration from Correspondences of OTV Projections,” IEE Proc.: Vision, Image, and Signal Processing, vol. 142, no. 5, pp. 289-296, Oct. 1995.
[3] J. Costeira and T. Kanade, “A Multibody Factorization Method for Independently Moving Objects,” Int'l J. Computer Vision, vol. 29, no. 3, pp. 159-179, 1998.
[4] U. Dhond and J.K. Aggarwal, “Structure From Stereo—A Review,” IEEE Trans. Systems, Man, and Cybernetics, vol. 19, no. 6, pp. 1489-1510, Nov. 1989.
[5] O. Faugeras, “Stratification of Three-Dimensional Vision: Projective, Affine, and Metric Representations,” J. Op. Soc. Am.-A, vol. 12, no. 3, pp. 465-484, Mar. 1995.
[6] A. Ho and T. Pong, “Cooperative Fusion of Stereo and Motion,” Pattern Recognition, Jan. 1996.
[7] G.A. Jones, “Constraint, Optimization, and Hierarchy: Reviewing Stereoscopic Correspondence of Complex Features,” Computer Vision and Image Understanding, vol. 65, no. 1, pp. 57-78, Jan. 1997.
[8] S. Maybank, Theory of Reconstruction from Image Motion. Berlin: Springer-Verlag, 1993.
[9] A. Mitiche, “A Computational Approach to the Fusion of Stereo and Kineopsis,” Motion Understanding: Robot and Human Vision, W.N. Martin and J.K. Aggarwal, eds., pp. 81-95, Kluwer Academic, 1988.
[10] T. Morita and T. Kanade, “A Sequential Factorization Method for Recovering Shape and Motion from Image Streams,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 8, pp. 858-867, Aug. 1997.
[11] M. Okutomi and T. Kanade, “A Multiple-Baseline Stereo,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 4, pp. 353-363, Apr. 1993.
[12] C.J. Poelman and T. Kanade, “A Paraperspective Factorization Method for Shape and Motion Recovery,” Computer Vision—ECCV 94, Proc. Third European Conf. Computer Vision, J.-O. Eklundh, ed., vol. 2, pp. 97-108. Stockholm: Springer-Verlag, May 1994.
[13] J. Shi and C. Tomasi, “Good Features to Track,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 593-600, 1994.
[14] C. Tomasi and T. Kanade, “Shape and Motion from Image Streams Under Orthography: A Factorization Method,” Int'l J. Computer Vision, vol. 9, no. 2, pp. 137-154, 1992.
[15] A.M. Waxman and J.H. Duncan, “Binocular Image Flows: Steps Toward Stereo-Motion Fusion,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 715-729, 1986.
[16] Z. Zhang and O.D. Faugeras, “Three-Dimensional Motion Computation and Object Segmentation in a Long Sequence of Stereo Frames,” Int'l J. Computer Vision, vol. 7, no. 3, pp. 211-241, 1992.
[17] Z. Zhang, O.D. Faugeras, and N. Ayache, “Analysis of a Sequence of Stereo Scenes Containing Multiple Moving Objects Using Rigidity Constraints,” Proc. IEEE Int'l Conf. Computer Vision, Tampa, Fla., 1988.

Index Terms:
Stereo-motion, 3D reconstruction, affine cameras.
Pui-Kuen Ho, Ronald Chung, "Stereo-Motion with Stereo and Motion in Complement," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 2, pp. 215-220, Feb. 2000, doi:10.1109/34.825760