Issue No. 12 - December 2003 (vol. 25)
pp: 1597-1608
ABSTRACT
This paper explores the combination of inertial sensor data with vision. Visual and inertial sensing are two sensory modalities that can be exploited to give robust solutions for image segmentation and the recovery of 3D structure from images, increasing the capabilities of autonomous robots and enlarging the application potential of vision systems. In biological systems, the information provided by the vestibular system is fused with vision at a very early processing stage, playing a key role in the execution of visual movements such as gaze holding and tracking, while visual cues aid spatial orientation and body equilibrium. In this paper, we set out a framework for using inertial sensor data in vision systems and describe some of the results obtained. The unit sphere projection camera model is used, providing a simple model for inertial data integration. Using the vertical reference provided by the inertial sensors, the image horizon line can be determined. Using just one vanishing point and the vertical, we can recover the camera's focal distance and provide an external bearing for the system's navigation frame of reference. Knowing the geometry of a stereo rig and its pose from the inertial sensors, the collineation of level planes can be recovered, providing enough constraints to segment and reconstruct vertical features and leveled planar patches.
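
The horizon and focal-distance recovery summarized in the abstract can be illustrated with a short sketch. The Python fragment below is not the authors' code: it assumes a plain pinhole camera with square pixels and a known principal point, whereas the paper itself works with the unit sphere projection model, and all function names and numbers are illustrative. Given the gravity vertical expressed in the camera frame, horizontal directions d satisfy n.T d = 0, so the horizon line is l ~ inv(K).T n, and requiring the back-projected ray of a horizontal vanishing point to be orthogonal to the vertical gives a closed form for the focal distance.

# Hypothetical sketch, not the authors' implementation.
import numpy as np

def horizon_line(K, vertical_cam):
    """Homogeneous image horizon line l (l.T @ x = 0) for intrinsics K and the
    unit gravity vertical expressed in the camera frame.
    Horizontal directions d satisfy vertical.T @ d = 0 and project to K @ d,
    so the horizon is l ~ inv(K).T @ vertical."""
    l = np.linalg.inv(K).T @ vertical_cam
    return l / np.linalg.norm(l[:2])   # normalize the line coefficients

def focal_from_vanishing_point(vp, principal_point, vertical_cam):
    """Focal distance f from one vanishing point of a horizontal direction.
    The back-projected ray [(u-cx)/f, (v-cy)/f, 1] must be orthogonal to the
    vertical, which gives a closed form for f."""
    (u, v), (cx, cy) = vp, principal_point
    nx, ny, nz = vertical_cam
    return -(nx * (u - cx) + ny * (v - cy)) / nz

# Made-up example: camera tilted 10 degrees about its x axis, f = 800 px.
vertical = np.array([0.0, np.cos(np.radians(10)), np.sin(np.radians(10))])
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
print("horizon line:", horizon_line(K, vertical))

# Synthesize a vanishing point of a horizontal direction with the same K,
# then recover f; the result should be ~800.
d = np.array([0.0, -vertical[2], vertical[1]])   # orthogonal to the vertical
vp_h = K @ d
vp = (vp_h[0] / vp_h[2], vp_h[1] / vp_h[2])
print("recovered f:", focal_from_vanishing_point(vp, (320.0, 240.0), vertical))

Because the synthetic vanishing point is generated with the same intrinsics, the recovered focal distance matches the assumed 800 pixels, which is only meant to show that the closed form is consistent, not to reproduce the paper's experiments.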
INDEX TERMS
Image processing and computer vision, edge and feature detection, sensor fusion.
CITATION
Jorge Lobo and Jorge Dias, "Vision and Inertial Sensor Cooperation Using Gravity as a Vertical Reference", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1597-1608, December 2003, doi:10.1109/TPAMI.2003.1251152