A Solution to the Next Best View Problem for Automated Surface Acquisition
October 1999 (vol. 21, no. 10), pp. 1016-1030

Abstract—A solution to the “next best view” (NBV) problem for automated surface acquisition is presented. The NBV problem is to determine which areas of a scanner's viewing volume need to be scanned to sample all of the visible surfaces of an a priori unknown object, and how to position and control the scanner to sample them. It is argued that solutions to the NBV problem are constrained by the other steps in a surface acquisition system and by the range scanner's particular sampling physics. A method for determining the unscanned areas of the viewing volume is presented. In addition, a novel representation, positional space (PS), is presented, which facilitates a solution to the NBV problem by representing what must be and what can be scanned in a single data structure. The number of costly computations needed to determine whether an area of the viewing volume would be occluded from some scanning position is decoupled from the number of positions considered for the NBV, thus reducing the computational cost of choosing one. An automated surface acquisition system designed to scan all visible surfaces of an a priori unknown object is demonstrated on real objects.
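
The decoupling claimed in the abstract can be illustrated with a short sketch. The following toy example is not the paper's implementation (the paper's positional space is a richer representation than this); it is a drastically simplified 2D stand-in in which positional space is just a ring of scanner bearings around the object. All names and parameters here (N_CELLS, GRAZE_LIMIT, VOID_PATCHES, OCCLUDERS, ray_blocked, observable_cells) are hypothetical. The point it demonstrates is structural: each unscanned ("void") patch is ray-cast into PS exactly once, so the expensive occlusion tests do not grow with the number of candidate positions.

import math

N_CELLS = 72                      # positional-space resolution: one cell per 5 degrees
GRAZE_LIMIT = math.radians(60)    # scanner cannot sample surfaces at grazing angles

# Hypothetical partial model: "void" patches bounding the unscanned volume,
# given as (x, y, nx, ny) position plus outward normal, and surface already
# acquired in earlier scans, modeled here as occluding circles (cx, cy, r).
VOID_PATCHES = [(1.0, 0.0, 1.0, 0.0), (0.0, 1.0, 0.0, 1.0), (-1.0, 0.0, -1.0, 0.0)]
OCCLUDERS = [(0.9, 0.6, 0.3)]

def ray_blocked(px, py, dx, dy):
    # Costly visibility test: does the ray from a void patch toward a PS cell
    # pass through already-acquired surface?  Run once per (patch, cell) pair.
    for cx, cy, r in OCCLUDERS:
        t = (cx - px) * dx + (cy - py) * dy          # project center onto ray
        if t > 0 and (px + t * dx - cx) ** 2 + (py + t * dy - cy) ** 2 < r * r:
            return True
    return False

def observable_cells(px, py, nx, ny):
    # PS "image" of one void patch: every scanner bearing from which the patch
    # could actually be sampled, respecting occlusion and the grazing limit.
    cells = set()
    for c in range(N_CELLS):
        theta = 2.0 * math.pi * c / N_CELLS
        dx, dy = math.cos(theta), math.sin(theta)    # unit ray: patch -> scanner
        if dx * nx + dy * ny < math.cos(GRAZE_LIMIT):
            continue                                 # too oblique to sample
        if not ray_blocked(px, py, dx, dy):
            cells.add(c)
    return cells

# Encode every patch into one PS histogram.  The ray casting above is
# independent of how many candidate positions are later scored: choosing
# the next best view is now just an argmax over PS cells.
ps_votes = [0] * N_CELLS
for px, py, nx, ny in VOID_PATCHES:
    for c in observable_cells(px, py, nx, ny):
        ps_votes[c] += 1

nbv = max(range(N_CELLS), key=lambda c: ps_votes[c])
print("next best view: bearing %.0f deg, sees %d of %d void patches"
      % (360.0 * nbv / N_CELLS, ps_votes[nbv], len(VOID_PATCHES)))

In this sketch, scoring a thousand candidate bearings would cost no additional ray casts: the visibility information is already stored in ps_votes, which is the essence of recording "what must be and what can be scanned in a single data structure."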

Index Terms:
Active vision, next best view, sensor planning, range imaging, reverse engineering, automated surface acquisition, model acquisition.
Citation:
Richard Pito, "A Solution to the Next Best View Problem for Automated Surface Acquisition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 10, pp. 1016-1030, Oct. 1999, doi:10.1109/34.799908