Simultaneous Localization and Map-Building Using Active Vision
July 2002 (vol. 24 no. 7)
pp. 865-880

An active approach to sensing can provide focused measurement capability over a wide field of view, which allows correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, and address issues including uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
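The SLAM formulation the abstract refers to builds on the stochastic-map/EKF framework of Smith, Self, and Cheeseman [1]: a single Gaussian state holds the robot pose and all feature positions, and each fixated measurement updates the joint estimate. As an illustrative sketch only (not the authors' implementation, which uses a 3-D stereo head rather than the planar range-bearing model assumed here), one EKF-SLAM predict/update cycle in Python might look like:

```python
import numpy as np

# Minimal planar EKF-SLAM sketch (illustrative assumption, not the paper's code).
# State vector: robot pose (x, y, theta) followed by 2-D point features.

def predict(mu, Sigma, v, w, dt, Q):
    """Motion prediction for the robot sub-state; features are static."""
    x, y, th = mu[0], mu[1], mu[2]
    mu = mu.copy()
    mu[0] = x + v * dt * np.cos(th)
    mu[1] = y + v * dt * np.sin(th)
    mu[2] = th + w * dt
    F = np.eye(len(mu))                      # Jacobian of the motion model
    F[0, 2] = -v * dt * np.sin(th)
    F[1, 2] = v * dt * np.cos(th)
    Sigma = F @ Sigma @ F.T
    Sigma[:3, :3] += Q                       # process noise on the robot only
    return mu, Sigma

def update(mu, Sigma, z, idx, R):
    """Range-bearing update of feature `idx`; returns innovation covariance S."""
    j = 3 + 2 * idx
    dx, dy = mu[j] - mu[0], mu[j + 1] - mu[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - mu[2]])
    H = np.zeros((2, len(mu)))               # measurement Jacobian
    H[:, :3] = np.array([[-dx / r, -dy / r, 0.0],
                         [dy / q, -dx / q, -1.0]])
    H[:, j:j + 2] = np.array([[dx / r, dy / r],
                              [-dy / q, dx / q]])
    S = H @ Sigma @ H.T + R                  # predicted innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)       # Kalman gain
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
    mu = mu + K @ innov
    Sigma = (np.eye(len(mu)) - K @ H) @ Sigma
    return mu, Sigma, S
```

The innovation covariance `S` returned by `update` is the quantity an uncertainty-based measurement-selection strategy compares across candidate features: an active head would be directed at the feature whose predicted measurement is most informative before the measurement is actually made.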

[1] R. Smith, M. Self, and P. Cheeseman, “A Stochastic Map for Uncertain Spatial Relationships,” Proc. Fourth Int'l Symp. Robotics Research, 1987.
[2] C.G. Harris and J.M. Pike, “3D Positional Integration from Image Sequences,” Proc. Third Alvey Vision Conf., pp. 233–236, 1987.
[3] N. Ayache, Artificial Vision for Mobile Robots—Stereo-Vision and Multisensory Perception. MIT Press, 1991.
[4] H.F. Durrant-Whyte, “Where am I? A Tutorial on Mobile Vehicle Localization,” Industrial Robot, vol. 21, no. 2, pp. 11–16, 1994.
[5] C.G. Harris, “Geometry from Visual Motion,” Active Vision, A. Blake and A. Yuille, eds., 1992.
[6] P. Beardsley, I. Reid, A. Zisserman, and D. Murray, “Active Visual Navigation Using Non-Metric Structure,” Proc. Fifth Int'l Conf. Computer Vision, 1995.
[7] J.-Y. Bouguet and P. Perona, “Visual Navigation Using a Single Camera,” Proc. Fifth Int'l Conf. Computer Vision, pp. 645–652, 1995.
[8] M. Pollefeys, R. Koch, and L. Van Gool, “Self-Calibration and Metric Reconstruction in Spite of Varying and Unknown Internal Camera Parameters,” Proc. Int'l Conf. Computer Vision, pp. 90–95, Jan. 1998.
[9] P.H.S. Torr, A.W. Fitzgibbon, and A. Zisserman, “Maintaining Multiple Motion Model Hypotheses over Many Views to Recover Matching and Structure,” Proc. Sixth Int'l Conf. Computer Vision, pp. 485–491, 1998.
[10] H.F. Durrant-Whyte, M.W.M.G. Dissanayake, and P.W. Gibbens, “Toward Deployments of Large Scale Simultaneous Localization and Map Building (SLAM) Systems,” Proc. Ninth Int'l Symp. Robotics Research, pp. 121–127, 1999.
[11] K.S. Chong and L. Kleeman, “Feature-Based Mapping in Real, Large Scale Environments Using an Ultrasonic Array,” Int'l J. Robotics Research, vol. 18, no. 2, pp. 3–19, Jan. 1999.
[12] S. Thrun, D. Fox, and W. Burgard, “A Probabilistic Approach to Concurrent Mapping and Localization for Mobile Robots,” Machine Learning, vol. 31, pp. 29–53 and Autonomous Robots, vol. 5, pp. 253–271 (joint issue), 1998.
[13] J.A. Castellanos, “Mobile Robot Localization and Map Building: A Multisensor Fusion Approach,” PhD thesis, Universidad de Zaragoza, Spain, 1998.
[14] J.J. Leonard and H.J.S. Feder, “A Computationally Efficient Method for Large-Scale Concurrent Mapping and Localization,” Robotics Research, Springer Verlag, 2000.
[15] A.J. Davison and D.W. Murray, “Mobile Robot Localization Using Active Vision,” Proc. Fifth European Conf. Computer Vision, pp. 809–825, 1998.
[16] S.K. Nayar, “Catadioptric Omnidirectional Cameras,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 482–488, June 1997.
[17] A.J. Davison and N. Kita, “Active Visual Localization for Cooperating Inspection Robots,” Proc. IEEE/RSJ Conf. Intelligent Robots and Systems, 2000.
[18] J.G.H. Knight, A.J. Davison, and I.D. Reid, “Constant Time SLAM Using Postponement,” Proc. IEEE/RSJ Conf. Intelligent Robots and Systems, 2001.
[19] A.J. Davison, “Mobile Robot Navigation Using Active Vision,” PhD thesis, Univ. of Oxford, 1998.
[20] A.J. Davison and N. Kita, “Sequential Localization and Map-Building for Real-Time Computer Vision and Robotics,” Robotics and Autonomous Systems, vol. 36, no. 4, pp. 171–183, 2001.
[21] J. MacCormick and M. Isard, “Partitioned Sampling, Articulated Objects and Interface-Quality Hand Tracking,” Proc. Sixth European Conf. Computer Vision, 2000.
[22] S. Thrun, W. Burgard, and D. Fox, “A Real-Time Algorithm for Mobile Robot Mapping with Applications to Multi-Robot and 3D Mapping,” Proc. IEEE Int'l Conf. Robotics and Automation, 2000.
[23] C.G. Harris and M. Stephens, “A Combined Corner and Edge Detector,” Proc. Fourth Alvey Vision Conf., pp. 147–151, 1988.
[24] J. Shi and C. Tomasi, “Good Features to Track,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 593–600, 1994.
[25] P. Whaite and F.P. Ferrie, “Autonomous Exploration: Driven by Uncertainty,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 3, pp. 193–205, Mar. 1997.
[26] D.W. Murray, I.D. Reid, and A.J. Davison, “Steering without Representation with the Use of Active Fixation,” Perception, vol. 26, pp. 1519–1528, 1997.
[27] M.F. Land and D.N. Lee, “Where We Look When We Steer,” Nature, vol. 369, pp. 742–744, 1994.
[28] G. Sandini and M. Tistarelli, “Robust Obstacle Detection Using Optical Flow,” Proc. IEEE Int'l Workshop Robust Computer Vision, 1990.
[29] M. Tistarelli and G. Sandini, “Dynamic Aspects in Active Vision,” CVGIP: Image Understanding, vol. 56, no. 1, pp. 108–192, 1992.
[30] E. Grossi and M. Tistarelli, “Active/Dynamic Stereo Vision,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 17, no. 11, pp. 1117–1128, 1995.
[31] A. Chiuso, P. Favaro, H. Jin, and S. Soatto, “‘MFm’: 3-D Motion from 2-D Motion Causally Integrated over Time,” Proc. Sixth European Conf. Computer Vision, 2000.

Index Terms:
Active vision, simultaneous localization and map-building, mobile robots.
Andrew J. Davison, David W. Murray, "Simultaneous Localization and Map-Building Using Active Vision," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 865-880, July 2002, doi:10.1109/TPAMI.2002.1017615