Automatic Sensor Placement from Vision Task Requirements
IEEE Transactions on Pattern Analysis and Machine Intelligence, May 1988 (vol. 10, no. 3), pp. 407-416

The problem of automatically generating the possible camera locations for observing an object is defined, and an approach to its solution is presented. The approach, which uses models of the object and the camera, is based on meeting four requirements: the spatial resolution must be above a minimum value, all surface points must be in focus, all surfaces must lie within the sensor field of view, and no surface points may be occluded. The approach converts each sensing requirement into a geometric constraint on the sensor location, from which the three-dimensional region of viewpoints satisfying that constraint is computed. The intersection of these regions is the space in which a sensor may be placed. The extension of this approach to laser-scanner range sensors is also described. Examples illustrate the resolution, focus, and field-of-view constraints for two vision tasks.
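The resolution and field-of-view requirements described in the abstract each reduce to a distance bound on the viewpoint, and the admissible sensor space is the intersection of the regions each bound defines. The following Python sketch illustrates that idea under simplifying assumptions (a pinhole camera and a bounding-sphere object model); the function names and parameters are illustrative, not the authors' formulation:

```python
import math

def max_standoff_for_resolution(focal_len_mm, pixel_mm, min_feature_mm, tilt_deg=0.0):
    """Farthest camera distance (mm) at which one pixel still covers no more
    than min_feature_mm on a surface tilted tilt_deg away from frontal.
    Pinhole model: pixel footprint = pixel_mm * d / (focal_len_mm * cos(tilt)),
    so d <= min_feature_mm * focal_len_mm * cos(tilt) / pixel_mm."""
    tilt = math.radians(tilt_deg)
    return min_feature_mm * focal_len_mm * math.cos(tilt) / pixel_mm

def min_standoff_for_fov(bounding_radius_mm, fov_deg):
    """Closest distance (mm) from the object's bounding-sphere centre at which
    the whole sphere fits inside the camera's viewing cone of angle fov_deg."""
    return bounding_radius_mm / math.sin(math.radians(fov_deg) / 2.0)

# A viewpoint at distance d satisfies both constraints only when
#   min_standoff_for_fov(...) <= d <= max_standoff_for_resolution(...);
# intersecting such per-constraint regions (here, a spherical shell of
# distances) over all requirements yields the admissible sensor space.
```

In the paper's full treatment each constraint yields a three-dimensional region of viewpoints rather than a single distance interval, but the intersection principle shown in the closing comment is the same.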


Index Terms:
computer vision; camera locations; spatial resolution; field of view; geometric constraint; sensor location; computerised picture processing
Citation:
C.K. Cowan, P.D. Kovesi, "Automatic Sensor Placement from Vision Task Requirements," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 3, pp. 407-416, May 1988, doi:10.1109/34.3905