Model-based testing

Automatically derive test cases from behavioural models of a system/component to check for correct behaviour.
Tests are derived from (semi-)formal behaviour models or dedicated test models. In general, these are functional tests, but approaches for testing non-functional properties such as performance or robustness exist as well.

Model-based testing is an approach to generating test cases for black-box testing.
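
To make the idea concrete, here is a minimal sketch of deriving a black-box test suite from a behaviour model. The model (a tiny two-state light switch), its encoding as a transition dictionary, and the transition-coverage criterion are all assumptions of this sketch, not part of any particular MBT tool:

```python
from collections import deque

# Hypothetical behaviour model as a Mealy machine:
# (state, input) -> (next_state, expected_output).
MODEL = {
    ("off", "press"):   ("on",  "light_on"),
    ("on",  "press"):   ("off", "light_off"),
    ("on",  "timeout"): ("off", "light_off"),
}
INITIAL = "off"

def shortest_prefix(target):
    """Breadth-first search for an input sequence that reaches `target`."""
    queue, seen = deque([(INITIAL, [])]), {INITIAL}
    while queue:
        state, path = queue.popleft()
        if state == target:
            return path
        for (src, inp), (dst, _) in MODEL.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [inp]))
    return None

def transition_coverage_suite():
    """One test per model transition: reach its source state, fire its input,
    and use the modelled output as the expected result (the test oracle)."""
    tests = []
    for (src, inp), (_, out) in MODEL.items():
        prefix = shortest_prefix(src)
        if prefix is not None:
            tests.append({"inputs": prefix + [inp], "expect_last": out})
    return tests

for test in transition_coverage_suite():
    print(test)
```

An industrial generator would support richer models (e.g. UML state machines with data) and stronger coverage criteria, but the principle is the same: the model provides both the stimuli and the expected reactions.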

Mutation testing was pioneered by Lipton in 1971; the idea has since been lifted from program code to models. Several approaches have been described in the literature, using a variety of modelling formalisms, and several commercial tools are available as well. [MBT1] gives an overview and a taxonomy of approaches. The web page at [MBT2] collects literature and tool references on these topics. Many applications of the approach are in the safety-critical systems domain (see [MBT3] for details), probably because there the additional effort of creating a sufficiently complete model for testing is easier to justify.

The model-based mutation testing technique uses the input model to create a number of mutants, which differ from the original model in small details. The goal is then to find tests that distinguish each mutant from the original model. These tests can subsequently be used to test an implementation of the model.
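
A minimal sketch of this idea, reusing the transition-dictionary encoding from above (the model and all state and input names are again invented): each first-order mutant redirects one transition, and a breadth-first search over paired original/mutant states looks for an input sequence on which the two machines react observably differently, i.e. a test that "kills" the mutant:

```python
from collections import deque

# Hypothetical model: (state, input) -> (next_state, output).
MODEL = {
    ("Idle", "start"): ("Run",  "started"),
    ("Run",  "stop"):  ("Idle", "stopped"),
    ("Run",  "start"): ("Run",  "ignored"),
}
INITIAL = "Idle"

def mutants(model):
    """First-order mutants: redirect one transition to a different target state."""
    states = {s for s, _ in model} | {d for d, _ in model.values()}
    for key, (dst, out) in model.items():
        for other in sorted(states - {dst}):
            mutant = dict(model)
            mutant[key] = (other, out)
            yield mutant

def killing_test(original, mutant, max_len=8):
    """BFS over paired states for an input sequence on which behaviour differs."""
    inputs = {i for _, i in original}
    queue, seen = deque([((INITIAL, INITIAL), [])]), {(INITIAL, INITIAL)}
    while queue:
        (s1, s2), path = queue.popleft()
        if len(path) >= max_len:
            continue
        for inp in inputs:
            t1, t2 = original.get((s1, inp)), mutant.get((s2, inp))
            if t1 is None and t2 is None:
                continue
            # Observable difference: input enabled in only one machine,
            # or different outputs -- this sequence kills the mutant.
            if t1 is None or t2 is None or t1[1] != t2[1]:
                return path + [inp]
            if (t1[0], t2[0]) not in seen:
                seen.add((t1[0], t2[0]))
                queue.append(((t1[0], t2[0]), path + [inp]))
    return None  # no difference found: mutant possibly equivalent (up to max_len)

suite = []
for m in mutants(MODEL):
    test = killing_test(MODEL, m)
    if test is not None and test not in suite:
        suite.append(test)
print(suite)  # input sequences that, with model outputs as oracle, form the suite
```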

An example would be a UML state machine that represents the behaviour of a car alarm system: the model arms the alarm when the doors are locked and raises an alarm when a door is opened before the car is unlocked. This model can be used to derive tests over the input/output behaviour of the alarm system, which can then be run against a real-world implementation.
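
The sketch below plays through this scenario: it encodes a simplified car alarm as such a model and replays generated input sequences against a hand-written implementation, using the model's outputs as the test oracle. All state, input, and output names are our own simplification of the system described above:

```python
# Simplified car-alarm behaviour as a Mealy-style model (hypothetical names):
# (state, input) -> (next_state, output).
CAR_ALARM = {
    ("Unarmed", "lock"):   ("Armed",   "armed"),
    ("Armed",   "unlock"): ("Unarmed", "disarmed"),
    ("Armed",   "open"):   ("Alarm",   "siren_on"),
    ("Alarm",   "unlock"): ("Unarmed", "siren_off"),
}
INITIAL = "Unarmed"

def expected_outputs(inputs):
    """Simulate the model to obtain the oracle for one test."""
    state, outs = INITIAL, []
    for inp in inputs:
        state, out = CAR_ALARM.get((state, inp), (state, "ignored"))
        outs.append(out)
    return outs

class CarAlarm:
    """Hypothetical system under test, written independently of the model."""
    def __init__(self):
        self.state = "Unarmed"

    def step(self, inp):
        if self.state == "Unarmed" and inp == "lock":
            self.state = "Armed";   return "armed"
        if self.state == "Armed" and inp == "unlock":
            self.state = "Unarmed"; return "disarmed"
        if self.state == "Armed" and inp == "open":
            self.state = "Alarm";   return "siren_on"
        if self.state == "Alarm" and inp == "unlock":
            self.state = "Unarmed"; return "siren_off"
        return "ignored"

# Input sequences, e.g. produced by generators like those sketched above.
TESTS = [
    ["lock", "unlock"],
    ["lock", "open", "unlock"],
]

for inputs in TESTS:
    sut = CarAlarm()
    actual = [sut.step(i) for i in inputs]
    assert actual == expected_outputs(inputs), (inputs, actual)
print("all generated tests passed")
```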

Advantages
  • Systematic generation of test cases, ensuring a consistent degree of test quality.
  • Updating the model for changed or new requirements is in most cases easier and faster than updating the hundreds or thousands of tests that might be affected; corrected tests can then be regenerated from the updated model. Many test-case generators strive to leave unrelated tests unaffected.
  • Optimised test suites can achieve the same test quality with less test-execution effort.

Limitations
  • Testing is an inherently incomplete approach; generated, high-quality tests cannot change this.
  • The effort of creating a test model is often seen as otherwise unnecessary; it can be balanced against the reduced test-design effort.
  • The quality of the generated tests depends not only on the model and the tool, but also on the coverage criterion used to drive test generation.
  • Factoring out common behaviour in the model can cause fewer tests to be generated than are needed to test an implementation in which these parts are not factored out.
Method Dimensions
  • In-the-lab environment, Open evaluation environment, Closed evaluation environment
  • Experimental - Testing
  • Hardware, Model, Software
  • Integration testing, Unit testing, System testing
  • Thinking, Acting, Sensing
  • Functional
  • V&V process criteria