Model-based mutation testing

Model-based mutation testing is a technique to automatically generate test cases from behaviour models. These test cases are generated so as to detect certain assumed faulty (i.e. "mutated") versions of the specification.
Derive high-quality tests from behaviour models.

Model-based mutation testing is a fault-based variant of Model-Based Testing, where the generated test cases are guaranteed to detect implementations that conform to certain faulty versions of the specification. The idea is to show that, in implementing the system, the requirements were correctly understood and that the SUT is free of the faults that were injected into the specification. Such faulty versions of the specification are called mutants, hence the term model-based mutation testing.
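
As a minimal illustration, consider a specification and a mutant encoded as deterministic Mealy machines (a hypothetical Python sketch; the state names, inputs, and outputs are invented for this example and do not come from any particular tool). A test case kills the mutant if it drives the two machines to observably different outputs:

    # Specification as a Mealy machine: (state, input) -> (next state, output).
    SPEC = {
        ("idle", "coin"): ("paid", "ok"),
        ("paid", "btn"):  ("idle", "coffee"),
    }

    # Mutant: a copy of the specification with one injected output fault.
    MUTANT = dict(SPEC)
    MUTANT[("paid", "btn")] = ("idle", "tea")

    def run(machine, inputs, state="idle"):
        """Execute an input sequence and collect the produced outputs."""
        outputs = []
        for i in inputs:
            state, out = machine[(state, i)]
            outputs.append(out)
        return outputs

    # This test kills the mutant: spec and mutant produce different outputs.
    test = ["coin", "btn"]
    assert run(SPEC, test) != run(MUTANT, test)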

The final output of the method is a high-quality test suite. The quality of a test suite can be measured in different ways. The mutation score [MMT1] is a test suite metric that quantifies the fault-detection capability of a test suite. Formally, the mutation score is defined as the number of killed mutants divided by the number of created mutants. Its benefit is that it quantifies semantic quality features of implementations, i.e. the absence of implementation faults, as opposed to syntactic quality features such as transition, state, or branch coverage metrics. The idea of using mutations for test quality analysis goes back to at least 1978 [MMT2].
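
The metric itself is simple to compute; as a trivial sketch (the function name is ours, not from the literature):

    def mutation_score(killed: int, created: int) -> float:
        """Mutation score = killed mutants / created mutants."""
        return killed / created

    # e.g. a test suite that kills 42 of 50 generated mutants:
    print(mutation_score(42, 50))  # 0.84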

  • The method can produce compact test suites (relative to the complexity of the system under test) with high functional coverage.
  • Using mutation coverage to drive test-case generation is fault-oriented: it guarantees that tests are generated exactly where faults propagate into an observable deviation of the system behaviour. Control-flow coverage criteria used to drive test-case generation usually do not achieve this (a minimal generation loop is sketched after this list).
  • Very large symbolic or concrete state spaces of the system under test can make the approach computationally infeasible. Good partial results can usually still be achieved, since missing coverage can be analysed manually, and testing is by its very nature an incomplete approach.
  • Since this is a black-box testing approach, the implementation might structure the behaviour differently than the specification model does. This can lead to additional potential faults in the implementation that are not covered by the test suite. Model-defactoring [MMT3] is a potential solution to this problem.
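
The generation principle behind the second point can be sketched in a few lines, reusing the Mealy-machine encoding from the example above (again a hypothetical illustration; real model-based mutation testing tools work with richer formalisms and conformance relations such as ioco, but the idea is the same): for each mutant, search the synchronous product of specification and mutant for an input sequence on which their outputs diverge, and skip mutants for which no such sequence exists (equivalent mutants).

    from collections import deque

    def distinguishing_test(spec, mutant, inputs, init="idle"):
        """Breadth-first search over the synchronous product of spec and
        mutant for a shortest input sequence whose outputs differ.
        Returns None if the mutant is observationally equivalent."""
        queue = deque([(init, init, [])])
        seen = {(init, init)}
        while queue:
            s_spec, s_mut, trace = queue.popleft()
            for i in inputs:
                # Inputs undefined in either machine are not applicable here.
                if (s_spec, i) not in spec or (s_mut, i) not in mutant:
                    continue
                n_spec, o_spec = spec[(s_spec, i)]
                n_mut, o_mut = mutant[(s_mut, i)]
                if o_spec != o_mut:           # outputs diverge: mutant killed
                    return trace + [i]
                if (n_spec, n_mut) not in seen:
                    seen.add((n_spec, n_mut))
                    queue.append((n_spec, n_mut, trace + [i]))
        return None

    def generate_suite(spec, mutants, inputs):
        """One killing test per non-equivalent mutant, duplicates removed."""
        tests = {tuple(t) for m in mutants
                 if (t := distinguishing_test(spec, m, inputs)) is not None}
        return [list(t) for t in tests]

For the specification and mutant above, generate_suite(SPEC, [MUTANT], ["coin", "btn"]) yields the single killing test ["coin", "btn"].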
References
  • [MMT1] Y. Jia and M. Harman, ‘An Analysis and Survey of the Development of Mutation Testing’, IEEE Transactions on Software Engineering, vol. 37, no. 5, pp. 649–678, Sep. 2011, doi: 10/dd8s2k.
  • [MMT2] R. A. DeMillo, R. J. Lipton, and F. G. Sayward, ‘Hints on Test Data Selection: Help for the Practicing Programmer’, IEEE Computer, vol. 11, no. 4, pp. 34–41, 1978.
  • [MMT3] R. Schlick, W. Herzner, and E. Jöbstl, ‘Fault-Based Generation of Test Cases from UML-Models – Approach and Some Experiences’, in Computer Safety, Reliability, and Security (SAFECOMP 2011), Lecture Notes in Computer Science, vol. 6894, Springer, Berlin, Heidelberg, 2011, doi: 10.1007/978-3-642-24270-0_20.
Method Dimensions
  • In-the-lab environment, Open evaluation environment, Closed evaluation environment
  • Experimental - Testing
  • Hardware, Model, Software
  • Unit testing, Integration testing, System testing, Acceptance testing
  • Thinking, Acting, Sensing
  • Functional
  • V&V process criteria