Three-Dimensional Vision Structure for Robot Applications
May 1988 (vol. 10 no. 3)
pp. 291-309

Focuses on the structure of robot sensing systems and the techniques for measuring and preprocessing 3-D data. To obtain the information required for controlling a given robot function, the sensing of 3-D objects is divided into four basic steps: transduction of relevant object properties (primarily geometric and photometric) into a signal; preprocessing the signal to improve it; extracting 3-D object features; and interpreting them. Each of these steps may usually be executed by any of several alternative techniques (tools). Tools for the transduction of 3-D data and for data preprocessing are surveyed. Because the performance of each tool depends on the specific vision task and its environmental conditions, both of which are variable, a flexible sensing-system structure is proposed. Such a system includes so-called tool-boxes, one box for each sensing step, and a supervisor, which controls iterative sensing feedback loops and consists of a rule-based program generator and a program execution controller. Sensing step sequences and tools are illustrated for two 3-D vision applications at SRI International: visually guided robot arc welding and locating identical parts in a bin.
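To make the tool-box/supervisor structure concrete, the following Python fragment is a minimal, hypothetical sketch of the idea, not the system described in the paper: the names (SENSING_STEPS, ToolBox, Supervisor, the "triangulation" tool, and the placeholder adequacy test) are invented for illustration. It shows one tool-box per sensing step and a supervisor that generates a sensing program from a rule table and runs it inside an iterative feedback loop.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

# The four basic sensing steps named in the abstract.
SENSING_STEPS = ["transduction", "preprocessing", "feature_extraction", "interpretation"]

@dataclass
class ToolBox:
    """Alternative techniques (tools) for one sensing step."""
    step: str
    tools: Dict[str, Callable[[Any], Any]] = field(default_factory=dict)

    def select(self, rules: Dict[str, str]) -> Callable[[Any], Any]:
        # The rule table maps each sensing step to the name of the tool to use.
        return self.tools[rules[self.step]]

class Supervisor:
    """Rule-based program generator plus program execution controller (sketch)."""

    def __init__(self, toolboxes: Dict[str, ToolBox], rules: Dict[str, str]):
        self.toolboxes = toolboxes
        self.rules = rules

    def generate_program(self) -> List[Callable[[Any], Any]]:
        # Pick one tool per sensing step according to the current rules.
        return [self.toolboxes[step].select(self.rules) for step in SENSING_STEPS]

    def execute(self, scene: Any, max_iterations: int = 3) -> Any:
        # Iterative sensing feedback loop: run the generated program and,
        # in a full system, revise the rules or re-sense when the result is inadequate.
        result = scene
        for _ in range(max_iterations):
            for tool in self.generate_program():
                result = tool(result)
            if result is not None:  # placeholder adequacy test
                break
        return result

if __name__ == "__main__":
    # Hypothetical usage: identity tools everywhere, plus one alternative
    # transduction tool that the rule table selects.
    boxes = {step: ToolBox(step, {"default": lambda x: x}) for step in SENSING_STEPS}
    boxes["transduction"].tools["triangulation"] = lambda x: {"range_image": x}
    rules = {step: "default" for step in SENSING_STEPS}
    rules["transduction"] = "triangulation"
    print(Supervisor(boxes, rules).execute(scene="raw_signal"))
```

Keeping tool selection in a rule table separate from the tools themselves mirrors the split between the rule-based program generator and the program execution controller: the supervisor can swap tools between feedback iterations without changing the pipeline itself.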

Index Terms:
3-D vision structure; computer vision; robot vision; parts picking; computerised pattern recognition; data preprocessing; rule-based program; arc welding; robots
Citation:
D. Nitzan, "Three-Dimensional Vision Structure for Robot Applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 3, pp. 291-309, May 1988, doi:10.1109/34.3895