Bibliographic References
Active/Dynamic Stereo Vision
September 1995 (vol. 17 no. 9)
pp. 868-879

Abstract—Visual navigation is a challenging issue in automated robot control. In many robot applications, such as object manipulation in hazardous environments or autonomous locomotion, it is necessary to automatically detect and avoid obstacles while planning a safe trajectory. In this context, the detection of corridors of free space along the robot trajectory is an important capability that requires nontrivial visual processing. In most cases it is possible to take advantage of the active control of the cameras.

In this paper we propose a cooperative scheme in which motion and stereo vision are used to infer scene structure and determine free-space areas. Binocular disparity, computed on several stereo images over time, is combined with optical flow from the same sequence to obtain a relative-depth map of the scene. Both the time-to-impact and the depth scaled by the distance of the camera from the fixation point in space are considered good relative measurements: they are based on the viewer, but centered on the environment.
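As a minimal illustration of why time-to-impact is a useful viewer-based measurement (not the authors' algorithm), the sketch below applies the classical relation for a camera translating toward the scene: the flow is radial from the focus of expansion (FOE), and time-to-impact is the image distance from the FOE divided by the radial flow magnitude, tau = r / (dr/dt). The FOE location and the dense flow field are assumed given here.

```python
import numpy as np

def time_to_impact(flow_u, flow_v, foe=(0.0, 0.0)):
    """Per-pixel time-to-impact (in frames) from a dense optical flow field,
    assuming pure forward translation with a known focus of expansion."""
    h, w = flow_u.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    rx, ry = xs - foe[0], ys - foe[1]              # image offsets from the FOE
    r = np.hypot(rx, ry)                           # distance from the FOE
    # radial component of the flow (projection onto the FOE direction)
    radial = (flow_u * rx + flow_v * ry) / np.maximum(r, 1e-9)
    ratio = r / np.maximum(radial, 1e-9)
    return np.where(radial > 1e-9, ratio, np.inf)  # no expansion -> no impact

# Synthetic check: a pure expansion flow v = p / tau has constant time-to-impact.
tau_true = 25.0
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w].astype(float)
u, v = xs / tau_true, ys / tau_true                # FOE at the image origin
tau = time_to_impact(u, v)
print(np.allclose(tau[1:, 1:], tau_true))          # True away from the FOE
```

Note that the result is in temporal units (frames), so it requires no metric calibration of the camera; this is what makes it a good relative measurement for obstacle avoidance.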

The need for calibrated parameters is considerably reduced by using an active control strategy: the cameras track a point in space independently of the robot motion, and the full rotation of the head, which includes the unknown robot motion, is derived from binocular image data.
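To see why relative depth reduces the need for calibrated parameters, consider standard parallel-axis stereo (an illustration, not the paper's formulation): depth is Z = b*f/d for baseline b, focal length f, and disparity d, so the ratio of two depths depends only on the two disparities and both calibration constants cancel.

```python
def scaled_depth(disparity, disparity_ref):
    """Depth relative to a reference point for parallel-axis stereo.

    Z = b*f/d, so Z / Z_ref = d_ref / d: the baseline b and the focal
    length f cancel, and no metric calibration is needed.
    """
    return disparity_ref / disparity

# Example: a point with half the reference disparity is twice as far away.
print(scaled_depth(4.0, 8.0))  # 2.0
```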

The feasibility of the approach in real robotic applications is demonstrated by several experiments performed on real image data acquired from an autonomous vehicle and a prototype camera head.

[1] R.D. Beer, Intelligence as Adaptive Behavior. Academic Press, 1990.
[2] R. Brooks, “A robust layered control system for a mobile robot,” IEEE J. of Robotics and Automation, vol. 2, no. 1, pp. 14-23, 1986.
[3] F. Ferrari, E. Grosso, M. Magrassi, and G. Sandini, “A stereo vision system for real time obstacle avoidance in unknown environment,” Proc. Int’l Workshop Intelligent Robots and Systems, Tokyo, July 1990.
[4] G. Sandini and M. Tistarelli, “Robust obstacle detection using optical flow,” Proc. IEEE Int’l Workshop Robust Computer Vision, pp. 396-411, Seattle, Oct. 1-3, 1990.
[5] M. Tistarelli and G. Sandini, “Dynamic aspects in active vision,” CVGIP: Image Understanding, vol. 56, no. 1, pp. 108-192, 1992.
[6] J. Aloimonos, I. Weiss, and A. Bandyopadhyay, “Active vision,” Int’l J. Computer Vision, vol. 1, no. 4, pp. 333-356, 1988.
[7] G. Sandini and M. Tistarelli, “Active tracking strategy for monocular depth inference over multiple frames,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 13-27, 1990.
[8] R.K. Bajcsy, “Active perception vs passive perception,” Proc. Third IEEE CS Workshop Computer Vision: Representation and Control, pp. 13-16, Bellaire, Mich., 1985.
[9] D.H. Ballard, “Animate vision,” Artificial Intelligence, vol. 48, pp. 57-86, 1991.
[10] D.H. Ballard and C.M. Brown, “Principles of animate vision,” CVGIP, special issue on Purposive and Qualitative Active Vision, Y. Aloimonos, ed., vol. 56, no. 1, pp. 3-21, July 1992.
[11] N.J. Bridwell and T.S. Huang, “A discrete spatial representation for lateral motion stereo,” CVGIP, vol. 21, pp. 33-57, 1983.
[12] N. Ayache and O. Faugeras, “Maintaining representations of the environment of a mobile robot,” IEEE Trans. Robotics and Automation, vol. 5, no. 6, pp. 804-819, 1989.
[13] D.J. Kriegman, E. Triendl, and T.O. Binford, “Stereo vision and navigation in buildings for mobile robots,” IEEE Trans. Robotics and Automation, vol. 5, no. 6, pp. 792-803, 1989.
[14] L. Matthies, T. Kanade, and R. Szeliski, “Kalman filter-based algorithms for estimating depth from image sequences,” Int’l J. Computer Vision, vol. 3, no. 3, pp. 209-238, 1989.
[15] N. Ahuja and L. Abbott, “Surfaces from dynamic stereo: Integrating camera vergence, focus, and calibration with stereo surface reconstruction,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1,007-1,029, Oct. 1993.
[16] E. Grosso, G. Sandini, and M. Tistarelli, “3D object reconstruction using stereo and motion,” IEEE Trans. Systems, Man, and Cybernetics, vol. 19, no. 6, Nov./Dec. 1989.
[17] R.A. Brooks, A.M. Flynn, and T. Marill, “Self calibration of motion and stereo vision for mobile robot navigation,” Proc. DARPA Workshop Image Understanding, pp. 398-410, Morgan Kaufmann, 1988.
[18] A.M. Waxman and J.H. Duncan, “Binocular image flows: Steps toward stereo-motion fusion,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 715-729, 1986.
[19] L. Li and J.H. Duncan, “3D translational motion and structure from binocular image flows,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 7, pp. 657-667, July 1993.
[20] M. Tistarelli, E. Grosso, and G. Sandini, “Dynamic stereo in visual navigation,” Proc. Int’l Conf. Computer Vision and Pattern Recognition, pp. 186-193, Lahaina, Hawaii, June 1991.
[21] E. Grosso, M. Tistarelli, and G. Sandini, “Active-dynamic stereo for navigation,” Proc. Second European Conf. Computer Vision, pp. 516-525, S. Margherita Ligure, Italy, May 1992.
[22] A. Izaguirre, P. Pu, and J. Summers, “A new development in camera calibration: Calibrating a pair of mobile cameras,” Int’l J. Robotics Research, pp. 104-116, 1988.
[23] Y.L. Chang, P. Liang, and S. Hackwood, “Adaptive self-calibration of vision-based robot systems,” IEEE Trans. Systems, Man, and Cybernetics, vol. 19, no. 4, July/Aug. 1989.
[24] A.P. Tirumalai, B.G. Schunck, and R.C. Jain, “Dynamic stereo with self-calibration,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 14, no. 12, pp. 1,184-1,189, Dec. 1992.
[25] L. Matthies and T. Kanade, “Using uncertainty models in visual motion and depth estimation,” Proc. Fourth Int’l Symp. Robotics Research, pp. 120-138, Santa Cruz, Calif., Aug. 9-14, 1987.
[26] R.C. Nelson and J. Aloimonos, “Using flow field divergence for obstacle avoidance in visual navigation,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 11, no. 10, pp. 1,102-1,106, Oct. 1989.
[27] D.H. Ballard, R.C. Nelson, and B. Yamauchi, “Animate vision,” Optics News, vol. 15, no. 5, pp. 17-25, 1989.
[28] B.K. Horn, Robot Vision. Cambridge, Mass.: MIT Press, 1986.
[29] P. Puget and T. Skordas, “Calibrating a mobile camera,” Image and Vision Computing, vol. 8, no. 4, pp. 341-348, 1990.
[30] D.N. Lee and P.E. Reddish, “Plummeting gannets: A paradigm of ecological optics,” Nature, vol. 293, pp. 293-294, 1981.
[31] J. Cutting, Perception with an Eye for Motion. Cambridge, Mass.: MIT Press, 1988.
[32] B.K.P. Horn and B.G. Schunck, “Determining optical flow,” Artificial Intelligence, vol. 17, no. 1-3, pp. 185-204, 1981.
[33] H.H. Nagel, “Direct estimation of optical flow and of its derivatives,” Artificial and Biological Vision Systems, G.A. Orban and H.H. Nagel, eds., pp. 193-224, Springer-Verlag, 1992.
[34] H.H. Nagel, “On the estimation of optical flow: Relations between different approaches and some new results,” Artificial Intelligence, vol. 33, pp. 299-324, 1987.
[35] S. Uras, F. Girosi, A. Verri, and V. Torre, “A computational approach to motion perception,” Biological Cybernetics, vol. 60, pp. 79-87, 1988.
[36] M. Tistarelli and G. Sandini, “Estimation of depth from motion using an anthropomorphic visual sensor,” Image and Vision Computing, vol. 8, no. 4, pp. 271-278, 1990.
[37] M. Tistarelli, “Multiple constraints for optical flow,” Proc. Third European Conf. Computer Vision, pp. 61-70, Stockholm, Sweden, May 1994.
[38] B.K.P. Horn, “Relative orientation,” Int’l J. Computer Vision, vol. 4, pp. 59-78, 1990.
[39] B. Kamgar-Parsi, “Practical computation of pan and tilt angles in stereo,” Tech. Rep. CS-TR-1640, Univ. of Maryland, College Park, Md., Mar. 1986.
[40] B. Sabata and J.K. Aggarwal, “Estimation of motion from a pair of range images: A review,” CVGIP, special issue on Image Understanding, vol. 54, no. 3, pp. 309-324, Nov. 1991.

Index Terms:
Active vision, dynamic vision, time-to-impact, stereo vision, motion analysis, navigation.
Enrico Grosso, Massimo Tistarelli, "Active/Dynamic Stereo Vision," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 9, pp. 868-879, Sept. 1995, doi:10.1109/34.406652