Estimating 3D Egomotion from Perspective Image Sequence
November 1990 (vol. 12, no. 11)
pp. 1040-1058

The computation of sensor motion from sets of displacement vectors obtained from consecutive pairs of images is discussed. The problem is investigated with emphasis on its application to autonomous robots and land vehicles. The effects of 3D camera rotation and translation upon the observed image are discussed, particularly the concept of the focus of expansion (FOE). It is shown that locating the FOE precisely is difficult when displacement vectors are corrupted by noise and errors. More robust performance can be achieved by computing a 2D region of possible FOE locations (termed the fuzzy FOE) instead of looking for a single-point FOE. The shape of this FOE region is an explicit indicator of the accuracy of the result. It has been shown elsewhere that, given the fuzzy FOE, a number of powerful inferences about the 3D scene structure and motion become possible. Aspects of computing the fuzzy FOE are emphasized, and the performance of a particular algorithm on real motion sequences taken from a moving autonomous land vehicle is shown.
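
The central idea of the abstract, accepting a 2D region of FOE candidates whose residuals are indistinguishable under noise rather than insisting on a single point, can be illustrated with a minimal Python sketch. This is not the authors' algorithm (that is developed in the paper and in [13]); it assumes the displacement vectors have already been derotated, and the angular-error measure, brute-force grid search, and tolerance parameter tol are illustrative choices only.

import numpy as np

def foe_consistency_error(foe, points, displacements, eps=1e-9):
    # Mean angular deviation (radians) between each displacement vector and the
    # radial direction away from a candidate FOE, under a pure-translation
    # (derotated) flow model; 0 means the flow is perfectly radial from foe.
    radial = points - foe
    r_norm = radial / (np.linalg.norm(radial, axis=1, keepdims=True) + eps)
    d_norm = displacements / (np.linalg.norm(displacements, axis=1, keepdims=True) + eps)
    cos_ang = np.clip(np.sum(r_norm * d_norm, axis=1), -1.0, 1.0)
    return float(np.mean(np.arccos(cos_ang)))

def fuzzy_foe_region(points, displacements, grid_x, grid_y, tol=0.05):
    # Brute-force grid search: keep every candidate whose error lies within
    # tol radians of the best one, a discrete stand-in for the fuzzy FOE region.
    errors = np.array([[foe_consistency_error(np.array([x, y]), points, displacements)
                        for x in grid_x] for y in grid_y])
    best = errors.min()
    region = [(grid_x[i], grid_y[j]) for j, i in zip(*np.nonzero(errors <= best + tol))]
    return region, errors

# Toy usage: camera translating along the optical axis over a fronto-parallel scene,
# so the true FOE is the image center (0, 0); measurement noise widens the region.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(50, 2))
disp = 0.05 * pts + rng.normal(scale=0.005, size=pts.shape)
gx = gy = np.linspace(-0.5, 0.5, 21)
region, _ = fuzzy_foe_region(pts, disp, gx, gy)
print(len(region), "candidate FOE locations within tolerance")

In this toy setting the returned region plays the role described in the abstract: a small, compact region indicates a well-constrained translation direction, while an elongated or large region signals that the displacement field does not pin down the FOE reliably.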

References:
[1] G. Adiv, "Determining three-dimensional motion and structure from optical flow generated by several moving objects," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-7, no. 4, pp. 384-401, 1985.
[2] P. Anandan, "Computing dense displacement fields with confidence measures in scenes containing occlusion," SPIE Intell. Robots Comput. Vision, vol. 521, pp. 184-194, 1984.
[3] A. Bandopadhay, B. Chandra, and D. N. Ballard, "Egomotion using active vision," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1986, pp. 498-503.
[4] S. T. Barnard and W. B. Thompson, "Disparity analysis of images," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-2, no. 4, pp. 333-340, July 1980.
[5] B. Bhanu and W. Burger, "DRIVE--Dynamic reasoning from integrated visual evidence," Honeywell Systems & Research Center, Minneapolis, MN, DARPA Rep. DACA 76-86-C-0017, June 1987.
[6] B. Bhanu and W. Burger, "Qualitative motion detection and tracking of targets from a mobile platform," in Proc. DARPA Image Understanding Workshop, Apr. 1988, pp. 289-318.
[7] B. Bhanu et al., "Qualitative target motion detection and tracking," in Proc. Image Understanding Workshop, Morgan Kaufmann, San Mateo, CA, 1989, pp. 370-398.
[8] S. Bharwani, E. Riseman, and A. Hanson, "Refinement of environmental depth maps over multiple frames," in Proc. IEEE Workshop Motion, Kiawah Island, SC, May 1986, pp. 73-80.
[9] T. Broida and R. Chellappa, "Estimation of object motion parameters from noisy images," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-8, no. 1, Jan. 1986.
[10] A. R. Bruss and B. K. P. Horn, "Passive navigation," Comput. Vision, Graphics, Image Processing, vol. 21, pp. 3-20, 1983.
[11] W. Burger and B. Bhanu, "Qualitative motion understanding," in Proc. Tenth Int. Joint Conf. Artificial Intelligence (IJCAI-87), Milan, Italy, Morgan Kaufmann, Aug. 1987.
[12] W. Burger and B. Bhanu, "Dynamic scene understanding for autonomous mobile robots," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, June 1988, pp. 736-741.
[13] W. Burger and B. Bhanu, "On computing a fuzzy focus of expansion for autonomous navigation," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, San Diego, CA, 1989, pp. 563-568.
[14] J. Q. Fang and T. S. Huang, "Some experiments on estimating the 3-D motion parameters of a rigid body from two consecutive image frames," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, no. 5, pp. 545-554, Sept. 1984.
[15] O. D. Faugeras, F. Lustman, and G. Toscani, "Motion and structure from point and line matches," in Proc. 1st Int. Conf. Computer Vision, June 1987, pp. 25-34.
[16] R. M. Haralick, "Using perspective transformations in scene analysis," Comput. Graphics Image Processing, vol. 13, pp. 191-221, 1980.
[17] C. Jerian and R. Jain, "Determining motion parameters for scenes with translation and rotation," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, no. 4, pp. 523-530, July 1984.
[18] J. Kim and B. Bhanu, "Motion disparity analysis using adaptive windows," Honeywell Systems & Research Center, Tech. Rep. 87SRC38, June 1987.
[19] D. T. Lawton, "Processing translational motion sequences," Comput. Vision, Graphics, Image Processing, vol. 22, pp. 114-116, 1983.
[20] D. N. Lee, "The optic flow field: The foundation of vision," Phil. Trans. Roy. Soc. London B, vol. 290, pp. 169-179, 1980.
[21] H. C. Longuet-Higgins, "A computer algorithm for reconstructing a scene from two projections," Nature, vol. 293, pp. 133-135, Sept. 1981.
[22] H. C. Longuet-Higgins and K. Prazdny, "The interpretation of a moving retinal image," Proc. Roy. Soc. London B, vol. 208, pp. 385-397, 1980.
[23] H. P. Moravec, "Towards automatic visual obstacle avoidance," in Proc. 5th Int. Joint Conf. Artificial Intelligence, Aug. 1977, p. 584.
[24] K. Prazdny, "Determining the instantaneous direction of motion from optical flow generated by a curvilinearly moving observer," Comput. Graphics Image Processing, vol. 17, pp. 238-248, 1981.
[25] D. Regan, K. Beverly, and M. Cynader, "The visual perception of motion in depth," Sci. Amer., pp. 136-151, July 1979.
[26] J. H. Rieger, "Information in optical flows induced by curved paths of observation," J. Opt. Soc. Amer., vol. 73, no. 3, pp. 339-344, Mar. 1983.
[27] J. H. Rieger and D. T. Lawton, "Processing differential image motion," J. Opt. Soc. Amer. A, vol. 2, no. 2, pp. 354-360, Feb. 1985.
[28] J. W. Roach and J. K. Aggarwal, "Determining the movements of objects from a sequence of images," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-2, no. 6, pp. 554-562, 1980.
[29] B. G. Schunck, "Image flow: Fundamentals and future research," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, June 1985, pp. 560-571.
[30] R. Y. Tsai and T. S. Huang, "Uniqueness and estimation of three-dimensional motion parameters of rigid objects with curved surfaces," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, no. 1, pp. 13-27, Jan. 1984.
[31] S. Ullman, The Interpretation of Visual Motion. Cambridge, MA: MIT Press, 1979.
[32] A. Verri and T. Poggio, "Qualitative information in the optical flow," in Proc. DARPA Image Understanding Workshop, Los Angeles, CA, Feb. 1987, pp. 825-834.
[33] J. Weng, T. S. Huang, and N. Ahuja, "Error analysis of motion parameter estimation from image sequences," in Proc. 1st Int. Conf. Computer Vision, June 1987, pp. 703-707.

Index Terms:
motion estimation; camera translation; FOE location; 3D egomotion; perspective image sequence; sensor motion; displacement vectors; autonomous robots; land vehicles; 3D camera rotation; focus of expansion; noise; errors; fuzzy FOE; mobile robots; pattern recognition; picture processing; vehicles
Citation:
W. Burger, B. Bhanu, "Estimating 3D Egomotion from Perspective Image Sequence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 11, pp. 1040-1058, Nov. 1990, doi:10.1109/34.61704