2009 International Conference on CyberWorlds (2009)
Bradford, West Yorkshire, UK
Sept. 7, 2009 to Sept. 11, 2009
ISBN: 978-0-7695-3791-7
pp: 211-218
ABSTRACT
Visual feedback is one of the most widely adopted solutions for driving the navigation of autonomous robots in unknown environments. This paper presents the structure of a visual interaction system suitable for real-time robotics applications. By means of a specific modeling approach, the visual system allows a team of mobile robots to perform the relevant visual tasks in a timely fashion. Guaranteeing real-time constraints for the processing tasks associated with the visual feedback is crucial to achieving accurate and robust control of mobile robots. The proposed visual infrastructure is based on a single camera, which provides a global view of the robots' workspace. A degenerate camera model is developed to handle planar motion in R^3. The model simplifies the calibration of the visual system and reduces the cost of coordinate transformations between the real world and the image space during operation. To show the behaviour and assess the performance of the visual interaction system, experimental results are reported for the real-time navigation of autonomous mobile robots.
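The abstract gives no implementation details of the degenerate camera model. As a rough, hypothetical illustration of the idea it describes (a single fixed camera observing robots constrained to planar motion), the sketch below assumes the ground plane maps to the image through a 3x3 homography, so calibration reduces to estimating that matrix from a few known point correspondences and each world/image transform becomes a single matrix-vector product. The point values and function names are invented for the example and are not taken from the paper.

```python
# Hedged sketch (not the authors' code): model a fixed overhead camera
# viewing a planar workspace as a ground-plane-to-image homography H.
import numpy as np

def estimate_homography(world_pts, image_pts):
    """Estimate H (image ~ H * world) from >= 4 point pairs via the DLT."""
    A = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    # The homography is the null vector of A, obtained from the SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def image_to_world(H, u, v):
    """Map an image pixel back to ground-plane coordinates."""
    X, Y, W = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return X / W, Y / W

# Calibration with four hypothetical ground-plane/image correspondences.
world = [(0, 0), (1, 0), (1, 1), (0, 1)]                  # metres on the floor
image = [(102, 480), (398, 470), (390, 180), (110, 190)]  # pixels
H = estimate_homography(world, image)
print(image_to_world(H, 250, 330))  # approximate robot position in metres
```

Under this planar assumption, no full 3D camera calibration is needed and each localization step costs only one 3x3 matrix multiplication, which is consistent with the abstract's claim of reduced transform cost for real-time operation.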
INDEX TERMS
cameras, mobile robots, multi-robot systems, navigation, path planning, real-time systems, robot vision
CITATION

M. L. Della Vedova, T. Facchinetti, A. Ferrara and A. Martinelli, "Visual Interaction for Real-Time Navigation of Autonomous Mobile Robots," 2009 International Conference on CyberWorlds (CW), Bradford, West Yorkshire, UK, 2009, pp. 211-218.
doi:10.1109/CW.2009.24