Issue No. 12, December 2000 (vol. 22)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/34.895979
<p><b>Abstract</b>—We describe an automated scene modeling system that consists of two components operating in an interleaved fashion: an incremental modeler that builds solid models from range imagery and a sensor planner that analyzes the resulting model and computes the next sensor position. This planning component is target-driven and computes sensor positions using model information about the imaged surfaces and the unexplored space in a scene. The method is shape-independent and uses a continuous-space representation that preserves the accuracy of sensed data. It is able to completely acquire a scene by repeatedly planning sensor positions, utilizing a partial model to determine volumes of visibility for contiguous areas of unexplored scene. These visibility volumes are combined with sensor placement constraints to compute sets of occlusion-free sensor positions that are guaranteed to improve the quality of the model. We show results for the acquisition of a scene that includes multiple, distinct objects with high occlusion.</p>
Index Terms: 3D scene reconstruction, model acquisition, sensor planning, active vision.
Michael K. Reed, Peter K. Allen, "Constraint-Based Sensor Planning for Scene Modeling", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 22, no. 12, pp. 1460-1467, December 2000, doi:10.1109/34.895979
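The abstract describes two components operating in an interleaved fashion: an incremental modeler that merges new range data into a partial model, and a target-driven planner that uses that model to compute the next occlusion-free sensor position. The toy sketch below illustrates only that loop structure; the discrete patch set, the `VISIBILITY` table, and the greedy selection rule are illustrative assumptions, not the paper's continuous-space visibility-volume method.

```python
# Toy next-best-view loop inspired by the interleaved modeler/planner
# architecture. All names and data here are hypothetical simplifications.

# Assumed visibility map: which surface patches each candidate sensor
# position can image without occlusion (stand-in for the paper's
# visibility volumes intersected with placement constraints).
VISIBILITY = {
    "pose_a": {1, 2, 3},
    "pose_b": {3, 4, 5},
    "pose_c": {5, 6},
}
ALL_PATCHES = {1, 2, 3, 4, 5, 6}


def plan_next_pose(seen):
    """Planner step: greedily pick the pose that images the most
    still-unexplored patches; return None if no pose improves the model."""
    best = max(VISIBILITY, key=lambda p: len(VISIBILITY[p] - seen))
    return best if VISIBILITY[best] - seen else None


def acquire_scene():
    """Interleave modeling and planning until the scene is covered."""
    seen = set()   # partial model: surfaces imaged so far
    history = []   # sequence of planned sensor positions
    while seen != ALL_PATCHES:
        pose = plan_next_pose(seen)
        if pose is None:          # planning cannot improve the model
            break
        seen |= VISIBILITY[pose]  # modeler merges the new range image
        history.append(pose)
    return history, seen
```

Each planned pose is guaranteed (in this toy setting) to add unexplored surface, mirroring the paper's claim that computed sensor positions always improve model quality.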