IEEE Intelligent Systems, vol. 27, no. 2 (March–April 2012), pp. 43–51
There is a common belief that making systems more autonomous improves them and is therefore a desirable design goal. Although small-scale, simple tasks often benefit from automation, this benefit does not necessarily generalize to more complex joint activity. When designing today's more sophisticated systems to work closely with humans, it is important to consider not only the machine's ability to work independently through autonomy, but also its ability to support interdependence with the other participants in the joint activity. We posit that the key to truly improving systems and helping them reach their full potential is designing them to support interdependent activity between participants. Our claim is that increasing autonomy, even in a simple and benign environment, does not always improve a system. We present results from an experiment demonstrating this phenomenon and explain why increasing autonomy can sometimes degrade performance.
team working, control engineering computing, mobile robots, software agents, interdependent activity, autonomy, interdependence, human-agent-robot teams, automation, complex joint activity, machine ability, intelligent systems, human-robot interaction, information technology, computational modeling, human-computer interaction, performance evaluation, man-machine systems, user/machine systems, human-centered computing
M. Johnson et al., "Autonomy and Interdependence in Human-Agent-Robot Teams," IEEE Intelligent Systems, vol. 27, no. 2, 2012, pp. 43–51; doi:10.1109/MIS.2012.1.