2011 IEEE 35th Annual Computer Software and Applications Conference (2011)
July 18, 2011 to July 22, 2011
Model-based testing attempts to generate test cases from a model that focuses on the relevant aspects of a given system under consideration (SUC). When the SUC becomes too large to be modeled in a single step, existing design techniques usually require a modularization of the modeling process. The resulting refinement process decomposes the model into several hierarchical layers. Conventional testing requires that the refined components be completely replaced by their subcomponents for test case generation. In most cases, this resolution of components leads to an oversized model in which test case generation becomes very costly, and the generated test case set becomes very large, leading to infeasibly long test execution times. To solve these problems, we present a new strategy to reduce (i) the number of test cases, and (ii) the costs of test case generation and test execution. To determine the trade-off incurred by this cost reduction, the reliability achieved by the new approach is compared with the reliability of the conventional approach. A case study based on a large web-based commercial system validates the approach and discusses its characteristics. We found that the new approach detected about 80% of the faults with about 20% of the test effort of the conventional approach.
Index terms: model-based testing, model refinement, event sequence graphs, software reliability, assignment problem
M. Linschulte, F. Belli and N. Güler, "Does 'Depth' Really Matter? On the Role of Model Refinement for Testing and Reliability," 2011 IEEE 35th Annual Computer Software and Applications Conference (COMPSAC), Munich, Germany, 2011, pp. 630-639.