Task-Oriented Generation of Visual Sensing Strategies in Assembly Tasks
February 1998 (vol. 20, no. 2), pp. 126-138

Abstract—This paper describes a method of systematically generating visual sensing strategies based on knowledge of the assembly task to be performed. Since visual sensing is usually performed with limited resources, visual sensing strategies should be planned so that only the necessary information is obtained efficiently. Generating an appropriate visual sensing strategy entails knowing what information to extract, where to get it, and how to get it. This is facilitated by knowledge of the task, which describes what objects are involved in the operation and how they are assembled. In the proposed method, the information necessary for the current operation is first extracted using a task analysis based on face contact relations between objects. Visual features to be observed are then determined using knowledge of the sensor, which describes the relationship between a visual feature and the information to be obtained. Finally, feasible visual sensing strategies are evaluated based on their predicted success probability, and the best strategy is selected. Our method has been implemented using a laser range finder as the sensor. Experimental results demonstrate the feasibility of the method and point out the importance of task-oriented evaluation of visual sensing strategies.
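For illustration only, the sketch below shows the final selection step described in the abstract: each feasible sensing strategy is scored by a predicted success probability and the highest-scoring one is chosen. This is a minimal sketch, not the authors' implementation; the names SensingStrategy, select_best_strategy, and predict_success_probability are hypothetical, and the paper's actual representations of task and sensor knowledge are far richer.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class SensingStrategy:
    """Hypothetical candidate strategy: which visual features to observe
    and from which sensor placement (assumed structure, not the paper's)."""
    features: List[str]        # visual features to be observed
    sensor_placement: str      # e.g., a viewpoint identifier

def select_best_strategy(
    candidates: List[SensingStrategy],
    predict_success_probability: Callable[[SensingStrategy], float],
) -> Optional[SensingStrategy]:
    """Evaluate each feasible strategy by its predicted success probability
    and return the highest-scoring one (None if there are no candidates)."""
    if not candidates:
        return None
    return max(candidates, key=predict_success_probability)
```

In the paper, the scoring function itself is derived from task knowledge (which information the current operation needs) and sensor knowledge (which features the sensor can reliably extract); the sketch only captures the task-oriented evaluation-and-selection structure.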


Index Terms:
Task-oriented vision, sensing planning, active vision, CAD-based vision, vision-based assembly.
Citation:
Jun Miura, Katsushi Ikeuchi, "Task-Oriented Generation of Visual Sensing Strategies in Assembly Tasks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 2, pp. 126-138, Feb. 1998, doi:10.1109/34.659931