Issue No. 9, Sept. 2013 (vol. 39)
pp. 1230-1244
Lin Padgham , RMIT University, Melbourne
Zhiyong Zhang , RMIT University, Melbourne
John Thangarajah , RMIT University, Melbourne
Tim Miller , University of Melbourne, Melbourne
Software testing remains the most widely used verification approach in industry today, consuming 30 to 50 percent of total development cost. Selecting test inputs for intelligent agents is difficult precisely because agents are intended to operate robustly under conditions that developers did not consider and would therefore be unlikely to test. Automatically generating and executing tests is one way to cover many conditions without significantly increasing cost. However, automatic generation and execution of tests raises the oracle problem: How can we automatically decide whether observed program behavior is correct with respect to its specification? In this paper, we present a model-based oracle generation method for unit testing belief-desire-intention (BDI) agents. We develop a fault model, based on the features of the core units, that captures the types of faults that may be encountered, and we define how to automatically generate a partial, passive oracle from the agent design models. We evaluate both the fault model and the oracle generation by testing 14 agent systems. Over 400 issues were raised, and these were analyzed to determine whether they represented genuine faults or false positives. We found that more than 70 percent of the issues raised indicated problems in either the design or the code. Of the 19 checks performed by our oracle, all but 5 found faults, and 8 of the 11 fault types identified in our fault model exhibited at least one fault. The evaluation indicates that the fault model is a productive conceptualization of the problems to be expected in agent unit testing, and that the oracle finds a substantial number of such faults with relatively little overhead in terms of false positives.
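The idea of a partial, passive oracle derived from design models can be illustrated with a minimal sketch: the oracle holds the design-time description of each plan (its triggering event and the goals or messages it is declared to post) and flags any observed execution that contradicts it. All names here (`PlanModel`, `Oracle`, `observe`) are illustrative assumptions for exposition, not the authors' actual API or checks.

```python
# Hypothetical sketch of a partial, passive oracle for unit testing a BDI
# agent: checks observed plan executions against the agent's design model.
# Names and structure are assumptions, not the paper's implementation.

from dataclasses import dataclass


@dataclass
class PlanModel:
    # Design-time description of one plan: the event that triggers it
    # and the goals/messages the design declares it may post.
    name: str
    trigger: str
    posts: frozenset


class Oracle:
    def __init__(self, plans):
        # Map plan name -> its design model.
        self.plans = {p.name: p for p in plans}
        self.issues = []

    def observe(self, plan_name, trigger, posted):
        # Passively check one observed plan execution against the design;
        # record an issue for each mismatch rather than failing the run.
        model = self.plans.get(plan_name)
        if model is None:
            self.issues.append(f"{plan_name}: executed plan missing from design model")
            return
        if trigger != model.trigger:
            self.issues.append(f"{plan_name}: handled undeclared trigger event '{trigger}'")
        for goal in set(posted) - set(model.posts):
            self.issues.append(f"{plan_name}: posted undeclared goal/message '{goal}'")


# Usage: one conforming execution, one that posts an undeclared goal.
oracle = Oracle([PlanModel("BookFlight", "TravelRequest",
                           frozenset({"ReserveSeat", "NotifyUser"}))])
oracle.observe("BookFlight", "TravelRequest", {"ReserveSeat"})  # conforms
oracle.observe("BookFlight", "TravelRequest", {"ChargeTwice"})  # violation
print(oracle.issues)
```

Because the oracle is passive, it only reports discrepancies between observed and modeled behavior; it cannot confirm full correctness, which is why the paper calls it a partial oracle.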
BDI agents, test oracles, unit testing
Lin Padgham, Zhiyong Zhang, John Thangarajah, Tim Miller, "Model-Based Test Oracle Generation for Automated Unit Testing of Agent Systems," IEEE Transactions on Software Engineering, vol. 39, no. 9, pp. 1230-1244, Sept. 2013, doi: 10.1109/TSE.2013.10
