Vision for Mobile Robot Navigation: A Survey
February 2002 (vol. 24 no. 2)
pp. 237-267

Abstract—This paper surveys the developments of the last 20 years in the area of vision for mobile robot navigation. Two major components of the paper deal with indoor navigation and outdoor navigation. For each component, we have further subdivided our treatment of the subject on the basis of structured and unstructured environments. For indoor robots in structured environments, we have dealt separately with the cases of geometrical and topological models of space. For unstructured environments, we have discussed the cases of navigation using optical flows, using methods from the appearance-based paradigm, and by recognition of specific objects in the environment.

[1] K.M. Andress and A.C. Kak, “Evidence Accumulation and Flow of Control in a Hierarchical Spatial Reasoning System,” AI Magazine, vol. 9, no. 2, pp. 75-95, 1988.
[2] R.C. Arkin, “Motor Schema-Based Mobile Robot Navigation: An Approach to Programming by Behavior,” Proc. 1987 IEEE Int'l Conf. Robotics and Automation, pp. 264-271, 1987.
[3] R.C. Arkin, “Motor Schema-Based Mobile Robot Navigation,” Int'l J. Robotics Research, vol. 8, no. 4, pp. 92-112, 1989.
[4] R. C. Arkin, “Temporal Coordination of Perceptual Algorithms for Mobile Robot Navigation,” IEEE Trans. Robotics and Automation, vol. 10, no. 3, pp. 276-286, June 1994.
[5] S. Atiya and G.D. Hager, “Real-Time Vision-Based Robot Localization,” IEEE Trans. Robotics and Automation, vol. 9, no. 6, pp. 785-800, Dec. 1993.
[6] N. Ayache, Artificial Vision for Mobile Robots—Stereo-Vision and Multisensory Perception. MIT Press, 1991.
[7] N. Ayache and O. Faugeras, “Maintaining Representations of the Environment of a Mobile Robot,” IEEE Trans. Robotics and Automation, vol. 5, no. 6, pp. 804-819, 1989.
[8] P. Backes, K. Tso, and K. Tharp, “Mars Pathfinder Mission Internet-Based Operations Using WITS,” Proc. 1998 IEEE Int'l Conf. Robotics and Automation, vol. 1, pp. 284-291, May 1998.
[9] C. Balkenius, “Spatial Learning with Perceptually Grounded Representations,” Robotics and Autonomous Systems, vol. 25, no. 3-4, pp. 165-175, Nov. 1998.
[10] A. Bernardino and J. Santos-Victor, “Visual Behaviours for Binocular Tracking,” Robotics and Autonomous Systems, vol. 25, no. 3-4, pp. 137-146, Nov. 1998.
[11] Active Vision, A. Blake and A. Yuille, eds. MIT Press, 1992.
[12] L. Boissier, B. Hotz, C. Proy, O. Faugeras, and P. Fua, “Autonomous Planetary Rover: On-Board Perception System Concept and Stereovision by Correlation Approach,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 181-186, May 1992.
[13] Navigating Mobile Robots: Systems and Techniques, J. Borenstein, H.R. Everett, and L. Feng, eds. Wellesley, Mass.: AK Peters, 1996.
[14] J. Borenstein and Y. Koren, “Real-Time Obstacle Avoidance for Fast Mobile Robots,” IEEE Trans. Systems, Man, and Cybernetics, vol. 19, no. 5, pp. 1179-1187, 1989.
[15] J. Borenstein and Y. Koren, “Real-Time Obstacle Avoidance for Fast Mobile Robots in Cluttered Environments,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 572-577, 1990.
[16] J. Borenstein and Y. Koren, “The Vector Field Histogram: Fast Obstacle Avoidance for Mobile Robots,” IEEE Trans. Robotics and Automation, vol. 7, no. 3, pp. 278-288, June 1991.
[17] D.J. Braunegg, “Marvel: A System that Recognizes World Locations with Stereo Vision,” IEEE Trans. Robotics and Automation, vol. 9, no. 3, pp. 303-308, June 1993.
[18] R.A. Brooks, “Visual Map Making for a Mobile Robot,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 824-829, 1985.
[19] R. Brooks, “A Robust Layered Control System for a Mobile Robot,” IEEE J. Robotics and Automation, vol. 2, no. 1, pp. 14-23, 1986.
[20] R. Chatila and J.-P. Laumond, “Position Referencing and Consistent World Modeling for Mobile Robots,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 138-145, Mar. 1985.
[21] M. Chen, T.M. Jochem, and D.A. Pomerleau, “AURORA: A Vision-Based Roadway Departure Warning System,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, vol. 1, pp. 243-248, Aug. 1995.
[22] H. Choset, I. Konukseven, and A. Rizzi, “Sensor Based Planning: A Control Law for Generating the Generalized Voronoi Graph,” Proc. Eighth IEEE Int'l Conf. Advanced Robotics, pp. 333-338, 1997.
[23] E. Chown, S. Kaplan, and D. Kortenkamp, “Prototypes, Location, and Associative Networks (PLAN): Towards a Unified Theory of Cognitive Mapping,” Cognitive Science, vol. 19, no. 1, pp. 1-51, Jan. 1995.
[24] H.I. Christensen, N.O. Kirkeby, S. Kristensen, and L. Knudsen, “Model-Driven Vision for In-Door Navigation,” Robotics and Autonomous Systems, vol. 12, pp. 199-207, 1994.
[25] I.J. Cox, “Blanche: Position Estimation for an Autonomous Robot Vehicle,” Proc. IEEE/RSJ Int'l Workshop Intelligent Robots and Systems, pp. 432-439, 1989.
[26] I.J. Cox, “Blanche: An Experiment in Guidance and Navigation of an Autonomous Robot Vehicle,” IEEE Trans. Robotics and Automation, vol. 7, no. 2, pp. 193-204, Apr. 1991.
[27] I.J. Cox, “Modeling a Dynamic Environment Using a Bayesian Multiple Hypothesis Approach,” Artificial Intelligence, vol. 66, no. 2, pp. 311-344, Apr. 1994.
[28] Autonomous Robot Vehicles, I.J. Cox and G.T. Wilfong, eds. Springer-Verlag, 1990.
[29] F. Cozman and E. Krotkov, “Robot Localization Using a Computer Vision Sextant,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 106-111, May 1995.
[30] A.V. Goldberg and R.E. Tarjan, “A New Approach to the Maximum Flow Problem,” Proc. 18th Ann. Symp. Theory of Computing, pp. 136-146, 1987.
[31] L. Delahoche, C. Pégard, B. Marhic, and P. Vasseur, “A Navigation System Based on an Omnidirectional Vision Sensor,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, pp. 718-724, Sept. 1997.
[32] F. Dellaert, D. Fox, W. Burgard, and S. Thrun, “Monte Carlo Localization for Mobile Robots,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 1322-1328, May 1999.
[33] A. Dev, B. Kröse, and F. Groen, “Navigation of a Mobile Robot on the Temporal Development of the Optic Flow,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, pp. 558-563, Sept. 1997.
[34] E.D. Dickmanns, “Computer Vision and Highway Automation,” Vehicle System Dynamics, vol. 31, no. 5, pp. 325-343, 1999.
[35] E.D. Dickmanns and B.D. Mysliwetz, “Recursive 3-D Road and Relative Ego-State Recognition,” IEEE Trans. Pattern Analysis and Machine Intelligence, pp. 199-213, May 1992.
[36] E.D. Dickmanns and A. Zapp, “A Curvature-Based Scheme for Improving Road Vehicle Guidance by Computer Vision,” Proc. SPIE-The Int'l Soc. Optical Eng.-Mobile Robots, vol. 727, pp. 161-168, Oct. 1986.
[37] E.D. Dickmanns and A. Zapp, “Autonomous High Speed Road Vehicle Guidance by Computer Vision,” Proc. 10th World Congress on Automatic Control, vol. 4, 1987.
[38] R.T. Dunlay and D. G. Morgenthaler, “Obstacle Avoidance on Roadways Using Range Data,” Proc. SPIE-The Int'l Soc. Optical Eng.-Mobile Robots, vol. 727, pp. 110-116, Oct. 1986.
[39] S. Engelson and D. McDermott, “Error Correction in Mobile Robot Map Learning,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 2555-2560, 1992.
[40] O.D. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint. Cambridge, Mass.: MIT Press, 1993.
[41] J. Fernández and A. Casals, “Autonomous Navigation in Ill-Structured Outdoor Environments,” Proc. 1997 IEEE Int'l Conf. Intelligent Robots and Systems, pp. 395-400, Sept. 1997.
[42] J. Ferruz and A. Ollero, “Autonomous Mobile Robot Motion Control in Non-Structured Environments Based on Real-Time Video Processing,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, pp. 725-731, Sept. 1997.
[43] P. Gaussier, C. Joulain, S. Zrehen, and A. Revel, “Visual Navigation in an Open Environment without Map,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, pp. 545-550, Sept. 1997.
[44] G. Giralt, R. Sobek, and R. Chatila, “A Multi-Level Planning and Navigation System for a Mobile Robot; A First Approach to Hilare,” Proc. Sixth Int'l Joint Conf. Artificial Intelligence, vol. 1, pp. 335-337, 1979.
[45] Y. Goto and A. Stentz, “The CMU System for Mobile Robot Navigation,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 99-105, Mar/Apr. 1987.
[46] V. Graefe, “Dynamic Vision System for Autonomous Mobile Robots,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, pp. 12-23, 1989.
[47] V. Graefe, “Vision for Autonomous Mobile Robots,” Proc. IEEE Workshop Advanced Motion Control, pp. 57-64, 1992.
[48] V. Graefe, “Driverless Highway Vehicles,” Proc. Int'l Hi-Tech Forum, pp. 86-95, 1992.
[49] V. Graefe, “Vision for Intelligent Road Vehicles,” Proc. IEEE Symp. Intelligent Vehicles, pp. 135-140, 1993.
[50] S. Sull and N. Ahuja, “Estimation of Motion and Structure of Planar Surfaces From a Sequence of Monocular Images,” Proc. Conf. Computer Vision and Pattern Recognition, pp. 732-733, 1991.
[51] Visual Servoing, K. Hashimoto, ed. World Scientific, 1993.
[52] M. Hashima, F. Hasegawa, S. Kanda, T. Maruyama, and T. Uchiyama, “Localization and Obstacle Detection for a Robot for Carrying Food Trays,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, pp. 345-351, Sept. 1997.
[53] J. Horn and G. Schmidt, “Continuous Localization for Long-Range Indoor Navigation of Mobile Robots,” Proc. IEEE Int'l Conf. Robotics and Automation, vol. 1, pp. 387-394, May 1995.
[54] J. Horn and G. Schmidt, “Continuous Localization of a Mobile Robot Based on 3D-Laser-Range-Data, Predicted Sensor Images, and Dead-Reckoning,” Robotics and Autonomous Systems, vol. 14, no. 2-3, pp. 99-118, May 1995.
[55] I. Horswill, “Polly: Vision-Based Artificial Agent,” Proc. Int'l Conf. AAAI, pp. 824-829, 1993.
[56] E. Huber and D. Kortenkamp, “Using Stereo Vision to Pursue Moving Agent with a Mobile Robot,” Proc. IEEE Int'l Conf. Robotics and Automation, vol. 3, pp. 2340-2346, May 1995.
[57] Y.K. Hwang and N. Ahuja, “Gross Motion Planning—A Survey,” ACM Computing Survey, vol. 24, no. 3, pp. 219-291, 1992.
[58] Y. K. Hwang and N. Ahuja, “A Potential Field Approach to Path Planning,” IEEE Trans. Robotics and Automation, vol. 8, no. 1, pp. 23-32, 1992.
[59] M. Isard and A. Blake, “Contour Tracking by Stochastic Propagation of Conditional Density,” Proc. European Conf. Computer Vision, pp. 343-356, 1996.
[60] M. Isard and A. Blake, “Condensation-Conditional Density Propagation for Visual Tracking,” Int'l J. Computer Vision, vol. 29, pp. 5-28, 1998.
[61] T.M. Jochem, “Using Virtual Active Vision Tools to Improve Autonomous Driving Tasks,” Technical Report CMU-RI-TR-94-39, Robotics Inst., Carnegie Mellon Univ., Oct. 1994.
[62] T.M. Jochem, D.A. Pomerleau, and C.E. Thorpe, “Vision Guided Lane Transition,” Proc. IEEE Symp. Intelligent Vehicles, Sept. 1995.
[63] T.M. Jochem, D.A. Pomerleau, and C.E. Thorpe, “Vision-Based Neural Network Road and Intersection Detection and Traversal,” Proc. IEEE Conf. Intelligent Robots and Systems, vol. 3, pp. 344-349, Aug. 1995.
[64] S.D. Jones, C. Andresen, and J.L. Crowley, “Appearance Based Processes for Visual Navigation,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, pp. 551-557, Sept. 1997.
[65] C. Joulain, P. Gaussier, A. Revel, and B. Gas, “Learning to Build Visual Categories from Perception-Action Associations,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, pp. 857-864, Sept. 1997.
[66] M.R. Kabuka and A.E. Arenas, “Position Verification of a Mobile Robot Using Standard Pattern,” IEEE J. Robotics and Automation, vol. 3, no. 6, pp. 505-516, Dec. 1987.
[67] L.P. Kaelbling, M.L. Littman, and A.R. Cassandra, “Planning and Acting in Partially Observable Stochastic Domains,” Artificial Intelligence, vol. 101, nos. 1-2, pp. 99-134, 1998.
[68] H. Kamada and M. Yoshida, “A Visual Control System Using Image Processing and Fuzzy Theory,” Vision-Based Vehicle Guidance, I. Masaki, ed., pp. 111-128, Springer-Verlag, 1991.
[69] A.C. Kak, K.M. Andress, C. Lopez-Abadia, M. S. Carroll, and J.R. Lewis, “Hierarchical Evidence Accumulation in the PSEIKI System and Experiments in Model-Driven Mobile Robot Navigation,” Uncertainty in Artificial Intelligence, M. Henrion, R. Shachter, L.N. Kanal, and J. Lemmer, eds. pp. 353-369, Elsevier, 1990.
[70] A. Kelly and A. Stentz, “Minimum throughput Adaptive Perception for High Speed Mobility,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, pp. 215-223, Sept. 1997.
[71] D. Kim and R. Nevatia, “Representation and Computation of the Spatial Environment for Indoor Navigation,” Proc. Int'l Conf. Computer Vision and Pattern Recognition, pp. 475-482, 1994.
[72] D. Kim and R. Nevatia, “Symbolic Navigation with a Generic Map,” Proc. IEEE Workshop Vision for Robots, pp. 136-145, Aug. 1995.
[73] D. Kim and R. Nevatia, “Recognition and Localization of Generic Objects for Indoor Navigation Using Functionality,” Image and Vision Computing, vol. 16, no. 11, pp. 729-743, Aug. 1998.
[74] D. Kortenkamp and T. Weymouth, “Topological Mapping for Mobile Robots Using a Combination of Sonar and Vision Sensing,” Proc. 12th Nat'l Conf. Artificial Intelligence, vol. 2, pp. 979-984, 1994.
[75] A. Kosaka and A.C. Kak, “Fast Vision-Guided Mobile Robot Navigation Using Model-Based Reasoning and Prediction of Uncertainties,” Computer Vision, Graphics, and Image Processing—Image Understanding, vol. 56, no. 3, pp. 271-329, 1992.
[76] A. Kosaka, M. Meng, and A.C. Kak, “Vision-Guided Mobile Robot Navigation Using Retroactive Updating of Position Uncertainty,” Proc. IEEE Int'l Conf. Robotics and Automation, vol. 2, pp. 1-7, 1993.
[77] A. Kosaka and G. Nakazawa, “Vision-Based Motion Tracking Using Prediction of Uncertainties,” Proc. IEEE Int'l Conf. Robotics and Automation, vol. 3, pp. 2637-2644, May 1995.
[78] D.J. Kriegman and T.O. Binford, “Generic Models for Robot Navigation,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 746-751, 1988.
[79] D.J. Kriegman, E. Triendl, and T.O. Binford, “Stereo Vision and Navigation in Buildings for Mobile Robots,” IEEE Trans. Robotics and Automation, vol. 5, no. 6, pp. 792-803, 1989.
[80] E. Krotkov, “Mobile Robot Localization Using a Single Image,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 978-983, 1989.
[81] E. Krotkov and M. Hebert, “Mapping and Positioning for a Prototype Lunar Rover,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 2913-2919, May 1995.
[82] E. Krotkov and R. Hoffman, “Terrain Mapping for a Walking Planetary Rover,” IEEE Trans. Robotics and Automation, vol. 10, no. 6, pp. 728-739, Dec. 1994.
[83] D. Kuan, G. Phipps, and A. Hsueh, “A Real-Time Road Following Vision System,” Proc. SPIE-The Int'l Soc. Optical Eng.-Mobile Robots, vol. 727, pp. 152-160, Oct. 1986.
[84] D. Kuan and U.K. Sharma, “Model Based Geometric Reasoning for Autonomous Road Following,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 416-423, Mar/Apr. 1987.
[85] K. Kuhnert, “Towards the Objective Evaluation of Low-Level Vision Operators,” Proc. Sixth European Conf. Artificial Intelligence, p. 657, 1984.
[86] K. Kuhnert, “A Vision System for Real Time Road and Object Recognition for Vehicle Guidance,” Proc. SPIE-The Int'l Soc. Optical Eng.-Mobile Robots, vol. 727, pp. 267-272, Oct. 1986.
[87] B. J. Kuipers and Y.-T. Byun, “A Robot Exploration and Mapping Strategy Based on a Semantic Hierarchy of Spatial Representations,” J. Robotics and Autonomous Systems, vol. 8, pp. 47-63, 1991.
[88] D.T. Lawton, T.S. Levitt, C. McConnell, and J. Glicksman, “Terrain Models for an Autonomous Land Vehicle,” Proc. 1986 IEEE Int'l Conf. Robotics and Automation, pp. 2043-2051, Apr. 1986.
[89] X. Lebègue and J.K. Aggarwal, “Significant Line Segments for an Indoor Mobile Robot,” IEEE Trans. Robotics and Automation, vol. 9, no. 6, pp. 801-815, Dec. 1993.
[90] J. Leonard and H. Durrant-Whyte, “Mobile Robot Localization by Tracking Geometric Beacons,” IEEE Trans. Robotics and Automation, vol. 7, no. 3, pp. 89-97, June 1991.
[91] L.M. Lorigo, R.A. Brooks, and W.E. Grimson, “Visually-Guided Obstacle Avoidance in Unstructured Environments,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, pp. 373-379, Sept. 1997.
[92] T. Lozano-Perez and M. Wesley, “An Algorithm for Planning Collision-Free Paths among Polyhedral Obstacles,” Comm. ACM, vol. 22, pp. 560-570, Oct. 1979.
[93] M.J. Magee and J.K. Aggarwal, “Determining the Position of a Robot Using a Single Calibration Object,” Proc. Int'l Conf. Robotics, pp. 140-149, 1984.
[94] A. Martínez and J. Vitrià, “Clustering in Image Space for Place Recognition and Visual Annotations for Human-Robot Interaction,” IEEE Trans. Systems, Man, and Cybernetics B, vol. 31, no. 5, Oct. 2001.
[95] Vision-based Vehicle Guidance, I. Masaki, ed. Springer-Verlag, 1991.
[96] Proc. Int'l Symp. Intelligent Vehicles, I. Masaki, ed., yearly since 1992.
[97] M.J. Mataric, “A Distributed Model for Mobile Robot Environment Learning and Navigation,” Technical Report AITR-1228, Massachusetts Inst. of Technology AI Lab, 1990.
[98] Y. Matsumoto, M. Inaba, and H. Inoue, “Visual Navigation Using View-Sequenced Route Representation,” Proc. IEEE Int'l Conf. Robotics and Automation, vol. 1, pp. 83-88, Apr. 1996.
[99] L. Matthies, E. Gat, R. Harrison, B. Wilcox, R. Volpe, and T. Litwin, “Mars Microrover Navigation: Performance Evaluation and Enhancement,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, vol. 1, pp. 433-440, Aug. 1995.
[100] L. Matthies and S.A. Shafer, “Error Modeling in Stereo Navigation,” IEEE J. Robotics and Automation, vol. 3, no. 3, pp. 239-248, June 1987.
[101] L. Matthies, T. Balch, and B. Wilcox, “Fast Optical Hazard Detection for Planetary Rovers Using Multiple Spot Laser Triangulation,” Proc. IEEE Int'l Conf. Robotics and Automation, vol. 1, pp. 859-866, Apr. 1997.
[102] M. Meng and A.C. Kak, “NEURO-NAV: A Neural Network Based Architecture for Vision-Guided Mobile Robot Navigation Using Non-Metrical Models of the Environment,” Proc. IEEE Int'l Conf. Robotics and Automation, vol. 2, pp. 750-757, 1993.
[103] M. Meng and A.C. Kak, “Mobile Robot Navigation Using Neural Networks and Nonmetrical Environment Models,” IEEE Control Systems, pp. 30-39, Oct. 1993.
[104] J. Miura and Y. Shirai, “Hierarchical Vision-Motion Planning with Uncertainty: Local Path Planning and Global Route Selection,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, July 1992.
[105] J. Miura and Y. Shirai, “Vision-Motion Planning with Uncertainty,” Proc. IEEE Int'l Conf. Robotics and Automation, vol. 2, pp. 1772-1777, May 1992.
[106] J. Miura and Y. Shirai, “An Uncertainty Model of Stereo Vision and Its Application to Vision-Motion Planning of Robots,” Proc. 13th Int'l Joint Conf. Artificial Intelligence, Aug. 1993.
[107] H.P. Moravec, “Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover,” PhD thesis, Stanford Univ., Sept. 1980. (published as Robot Rover Visual Navigation. Ann Arbor, MI: UMI Research Press, 1981.)
[108] H.P. Moravec, “The Stanford Cart and the CMU Rover,” Proc. IEEE, vol. 71, no. 7, pp. 872-884, July 1983.
[109] H.P. Moravec and A. Elfes, “High Resolution Maps from Wide Angle Sonar,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 116-121, 1985.
[110] R. Koch, M. Pollefeys, and L. Van Gool, “Multi Viewpoint Stereo from Uncalibrated Video Sequences,” Proc. Fifth European Conf. Computer Vision, H. Burkhardt and B. Neumann, eds., pp. 55-71, June 1998.
[111] T. Nakamura and M. Asada, “Motion Sketch: Acquisition of Visual Motion Guided Behaviors,” Proc. 14th Int'l Joint Conf. Artificial Intelligence, vol. 1, pp. 126-132, Aug. 1995.
[112] T. Nakamura and M. Asada, “Stereo Sketch: Stereo Vision-Based Target Reaching Behavior Acquisition with Occlusion Detection and Avoidance,” Proc. IEEE Int'l Conf. Robotics and Automation, vol. 2, pp. 1314-1319, Apr. 1996.
[113] S. Negahdaripour, B. Hayashi, and Y. Aloimonos, “Direct Motion Stereo for Passive Navigation,” IEEE Trans. Robotics and Automation, vol. 11, no. 6, pp. 829-843, Dec. 1995.
[114] N.J. Nilsson, “Shakey the Robot,” Technical Report 323, SRI Int'l, Apr. 1984.
[115] T. Ohno, A. Ohya, and S. Yuta, “Autonomous Navigation for Mobile Robots Referring Pre-Recorded Image Sequence,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, vol. 2, pp. 672-679, Nov. 1996.
[116] A. Ohya, A. Kosaka, and A. Kak, “Vision-Based Navigation of Mobile Robot with Obstacle Avoidance by Single Camera Vision and Ultrasonic Sensing,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, pp. 704-711, Sept. 1997.
[117] A. Ohya, A. Kosaka, and A. Kak, “Vision-Based Navigation by Mobile Robots with Obstacle Avoidance Using Single-Camera Vision and Ultrasonic Sensing,” IEEE Trans. Robotics and Automation, vol. 14, no. 6, pp. 969-978, Dec. 1998.
[118] C. Olson and L. Matthies, “Maximum Likelihood Rover Localization by Matching Range Maps,” Proc. IEEE Int'l Conf. Robotics and Automation, vol. 1, pp. 272-277, May 1998.
[119] G. Oriolo, G. Ulivi, and M. Vendittelli, “On-Line Map Building and Navigation for Autonomous Mobile Robots,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 2900-2906, May 1995.
[120] R. Pagnot and P. Grandjean, “Fast Cross Country Navigation on Fair Terrains,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 2593-2598, May 1995.
[121] J. Pan, D.J. Pack, A. Kosaka, and A.C. Kak, “FUZZY-NAV: A Vision-Based Robot Navigation Architecture Using Fuzzy Inference for Uncertainty-Reasoning,” Proc. IEEE World Congress Neural Networks, vol. 2, pp. 602-607, July 1995.
[122] J. Pan, G.N. DeSouza, and A.C. Kak, “Fuzzy Shell: A Large-Scale Expert System Shell Using Fuzzy Logic for Uncertainty Reasoning,” IEEE Trans. Fuzzy Systems, vol. 6, no. 4, pp. 563-581, Nov. 1998.
[123] D. Pierce and B. Kuipers, “Learning to Explore and Build Maps,” Proc. 12th Nat'l Conf. Artificial Intelligence, vol. 2, pp. 1264-1271, 1994.
[124] D.A. Pomerleau, “ALVINN: An Autonomous Land Vehicle in a Neural Network,” Technical Report CMU-CS-89-107, Carnegie Mellon Univ., 1989.
[125] D.A. Pomerleau, “Efficient Training of Artificial Neural Networks for Autonomous Navigation,” Neural Computation, vol. 3, pp. 88-97, 1991.
[126] D.A. Pomerleau, “Reliability Estimation for Neural Network Based Autonomous Driving,” Robotics and Autonomous Systems, vol. 12, pp. 113-119, 1994.
[127] D.A. Pomerleau, “Neural Network Vision for Robot Driving,” The Handbook of Brain Theory and Neural Networks, M. Arbib, ed. 1995.
[128] D. Pomerleau and T. Jochem, “Rapidly Adapting Machine Vision for Automated Vehicle Steering,” IEEE Expert, pp. 19-27, Apr. 1996.
[129] U. Regensburger and V. Graefe, “Visual Recognition of Obstacles on Roads,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, pp. 980-987, 1994.
[130] P. Rives and J. Borrelly, “Underwater Pipe Inspection Task Using Visual Servoing Techniques,” Proc. IEEE Int'l Conf. Intelligent Robots and Systems, pp. 63-68, Sept. 1997.
[131] A. Rizzi, G. Bianco, and R. Cassinis, “A Bee-Inspired Visual Homing Using Color Images,” Robotics and Autonomous Systems, vol. 25, no. 3-4, pp. 159-164, Nov. 1998.
[132] N. Sawasaki, T. Morita, and T. Uchiyama, “Design and Implementation of High-Speed Visual Tracking System for Real-Time Motion Analysis,” Proc. Int'l Conf. Pattern Recognition, pp. 478-484, 1996.
[133] J. Santos-Victor, G. Sandini, F. Curotto, and S. Garibaldi, “Divergent Stereo for Robot Navigation: Learning from Bees,” Proc. IEEE CS Conf. Computer Vision and Pattern Recognition, 1993.
[134] J. Santos-Victor, G. Sandini, F. Curotto, and S. Garibaldi, “Divergent Stereo in Autonomous Navigation: from Bees to Robots,” Int'l J. Computer Vision, vol. 14, no. 2, pp. 159-177, Mar. 1995.
[135] R. Schuster, N. Ansari, and A. Bani-Hashemi, “Steering a Robot with Vanishing Points,” IEEE Trans. Robotics and Automation, vol. 9, no. 4, pp. 491-498, Aug. 1993.
[136] R. Simmons, E. Krotkov, L. Chrisman, F. Cozman, R. Goodwin, M. Hebert, L. Katragadda, S. Koenig, G. Krishnaswamy, Y. Shinoda, W. Whittaker, and P. Klader, “Experience with Rover Navigation for Lunar-Like Terrains,” Proc. 1995 IEEE Int'l Conf. Intelligent Robots and Systems, pp. 441-446, Aug. 1995.
[137] R. Simmons and S. Koenig, “Probabilistic Robot Navigation in Partially Observable Environments,” Proc. Int'l Joint Conf. Artificial Intelligence, pp. 1080-1087, Aug. 1995.
[138] K. Sugihara, “Some Location Problems for Robot Navigation Using a Single Camera,” Computer Vision, Graphics, and Image Processing, vol. 42, pp. 112-129, 1988.
[139] K. Sutherland and W. Thompson, “Localizing in Unstructured Environments: Dealing with the Errors,” IEEE Trans. Robotics and Automation, vol. 10, no. 6, pp. 740-754, Dec. 1994.
[140] C. Thorpe, “An Analysis of Interest Operators for FIDO,” Proc. IEEE Workshop Computer Vision: Representation and Control, pp. 135-140, Apr./May 1984.
[141] C. Thorpe, “FIDO: Vision and Navigation for a Mobile Robot,” PhD dissertation, Dept. Computer Science, Carnegie Mellon Univ., Dec. 1984.
[142] C. Thorpe, T. Kanade, and S.A. Shafer, “Vision and Navigation for the Carnegie-Mellon Navlab,” Proc. Image Understanding Workshop, pp. 143-152, 1987.
[143] C. Thorpe, M. Hebert, T. Kanade, and S.A. Shafer, “Vision and Navigation for the Carnegie-Mellon Navlab,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 10, no. 3, pp. 362-373, May 1988.
[144] S. Thrun, “Learning Metric-Topological Maps for Indoor Mobile Robot Navigation,” Artificial Intelligence, vol. 99, no. 1, pp. 21-71, Feb. 1998.
[145] S. Thrun, “Probabilistic Algorithms in Robotics,” Technical Report CMU-CS-00-126, Carnegie Mellon Univ., 2000.
[146] D.C. Tseng and C.H. Chang, “Color Segmentation Using Perceptual Attributes,” Proc. 11th Conf. Pattern Recognition, vol. 3, pp. 228-231, 1992.
[147] T. Tsubouchi and S. Yuta, “Map-Assisted Vision System of Mobile Robots for Reckoning in a Building Environment,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 1978-1984, 1987.
[148] S. Tsugawa, T. Yatabe, T. Hirose, and S. Matsumoto, “An Automobile with Artificial Intelligence,” Proc. Sixth Int'l Joint Conf. Artificial Intelligence, pp. 893-895, 1979.
[149] T. Tsumura, “Survey of Automated Guided Vehicle in Japanese Factory,” Proc. Int'l Conf. Robotics and Automation, pp. 1329-1334, Apr. 1986.
[150] M.A. Turk and M. Marra, “Color Road Segmentation and Video Obstacle Detection,” Proc. SPIE-The Int'l Soc. Optical Eng.-Mobile Robots, vol. 727, pp. 136-142, Oct. 1986.
[151] M.A. Turk, D.G. Morgenthaler, K.D. Gremban, and M. Marra, “VITS-A Vision System for Autonomous Land Vehicle Navigation,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 10, no. 3, pp. 342-361, May 1988.
[152] R. Wallace, K. Matsuzaki, Y. Goto, J. Crisman, J. Webb, and T. Kanade, “Progress in Robot Road-Following,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 1615-1621, Apr. 1986.
[153] R.S. Wallace, “Robot Road Following by Adaptive Color Classification and Shape Tracking,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 258-263, Mar/Apr. 1987.
[154] A.M. Waxman, J.J. LeMoigne, L.S. Davis, and B. Srinivasan, “Visual Navigation of Roadways,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 862-867, Mar. 1985.
[155] A.M. Waxman, J.J. LeMoigne, L.S. Davis, B. Srinivasan, T.R. Kushner, E. Liang, and T. Siddalingaiah, “A Visual Navigation System for Autonomous Land Vehicles,” IEEE Trans. Robotics and Automation, vol. 3, no. 2, pp. 124-141, Apr. 1987.
[156] J. Weng and S. Chen, “Vision-Guided Navigation Using Shoslif,” Neural Networks, vol. 11, nos. 7-8, pp. 1511-1529, Oct./Nov. 1998.
[157] B. Wilcox and D. Gennery, “A Mars Rover for the 1990's,” J. Brit. Interplanetary Soc., vol. 40, pp. 484-488, 1987.
[158] B. Wilcox, L. Matthies, D. Gennery, B. Cooper, T. Nguyen, T. Litwin, A. Mishkin, and H. Stone, “Robotic Vehicles for Planetary Exploration,” Proc. 1992 IEEE Int'l Conf. Robotics and Automation, pp. 175-180, May 1992.
[159] Y. Yagi, S. Kawato, and S. Tsuji, “Real-Time Omnidirectional Image Sensor (COPIS) for Vision-Guided Navigation,” IEEE Trans. Robotics and Automation, vol. 10, no. 1, pp. 11-21, Feb. 1994.
[160] Y. Yagi, Y. Nishizawa, and M. Yachida, “Map-Based Navigation for a Mobile Robot with Omnidirectional Image Sensor COPIS,” IEEE Trans. Robotics and Automation, vol. 11, no. 5, pp. 634-648, Oct. 1995.
[161] B. Yamauchi and R. Beer, “Spatial Learning for Navigation in Dynamic Environments,” IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 26, no. 3, pp. 496-505, June 1996.
[162] Z. Zhang and O. Faugeras, “A 3D World Model Builder with a Mobile Robot,” Int'l J. Robotics Research, vol. 11, no. 4, pp. 269-285, 1992.
[163] J.Y. Zheng, M. Barth, and S. Tsuji, “Autonomous Landmark Selection for Route Recognition by a Mobile Robot,” Proc. IEEE Int'l Conf. Robotics and Automation, pp. 2004-2009, 1991.
[164] U.R. Zimmer, “Robust World-Modeling and Navigation in a Real World,” Proc. Third Int'l Conf. Fuzzy Logic, Neural Nets and Soft Computing, vol. 13, no. 2-4, pp. 247-260, Oct. 1996.
[165] P. Zingaretti and A. Carbonaro, “Route Following Based on Adaptive Visual Landmark Matching,” Robotics and Autonomous Systems, vol. 25, no. 3-4, pp. 177-184, Nov. 1998.

Index Terms:
Mobile robotics, navigation, computer vision, indoor navigation, outdoor navigation.
Citation:
Guilherme N. DeSouza, Avinash C. Kak, "Vision for Mobile Robot Navigation: A Survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237-267, Feb. 2002, doi:10.1109/34.982903