Model-based robustness testing

Use an abstracted behaviour model of a component or system to derive unexpected or slightly out-of-specification stimuli, apply them to the artefact under test, and check its response for robustness. Because a behaviour model is employed, the out-of-specification inputs can also come after a sequence of perfectly valid inputs, allowing the method to "go deep" into the system's state space before switching to robustness testing.

Robustness is defined by ANSI/IEEE as “the degree to which a system or component can function correctly in the presence of invalid inputs or stressful environmental conditions” [MRT1]. Some authors include the graceful handling of internal error conditions in their definition of robustness. For our purposes, we stay with the ANSI/IEEE definition, in which challenges to robustness originate outside the system.

The definition of what constitutes valid inputs and normal environmental conditions can be considered the precondition in a contract between the system and its environment. If the precondition is fulfilled, the system shall fulfil its obligations and operate as specified. In most settings, this contract is not, or at least not completely, formalised.
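
To make this concrete, the following minimal sketch shows such a contract made explicit in code; the component, its [0, 250] km/h input range, and all names are hypothetical. The precondition defines which inputs are valid, and the component only promises specified behaviour while it holds.

```python
def precondition(speed_kmh: float) -> bool:
    """Contract precondition: valid inputs are speeds in [0, 250] km/h."""
    return 0.0 <= speed_kmh <= 250.0

def braking_distance_m(speed_kmh: float) -> float:
    """Specified behaviour, guaranteed only while the precondition holds."""
    assert precondition(speed_kmh), "contract violated: out-of-specification input"
    v = speed_kmh / 3.6           # km/h -> m/s
    return v * v / (2.0 * 7.0)    # idealised constant deceleration of 7 m/s^2

# Robustness testing asks what the real implementation does when no such
# assert exists and the precondition is violated, e.g. braking_distance_m(-40).
```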

Robustness testing is therefore the experimental evaluation of how far the system operates as specified, or acceptably close to its specification (i.e. degrades gracefully [MRT2]), when this precondition is violated. If the system implementation makes more, or incorrect, assumptions beyond those included in the contract, this should instead be addressed by sufficiently complete functional tests and stress tests that stay within the explicitly defined operating conditions.

Since there are usually infinitely many ways to violate the contract precondition with unexpected inputs, (a) completeness is impossible, so a definition of test adequacy is needed to decide which tests to select, and (b) tests need to be generated automatically, since manual test design is infeasible. The intentional and automated use of unspecified inputs is also called fuzzing [MRT3], especially when applied to security testing. Classic fuzzing originally uses randomized inputs in large test suites.
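
As an illustration of this classic, purely randomized form of fuzzing, the sketch below generates a large suite of random byte strings and counts unhandled crashes; the system_under_test callable is a hypothetical stand-in for the real test harness.

```python
import random

def random_fuzz_inputs(n: int, max_len: int = 64, seed: int = 0):
    """Generate a large suite of randomized byte-string stimuli."""
    rng = random.Random(seed)
    for _ in range(n):
        length = rng.randrange(max_len + 1)
        yield bytes(rng.randrange(256) for _ in range(length))

def count_robustness_failures(system_under_test, n: int = 10_000) -> int:
    """Apply each stimulus; an unhandled exception counts as a robustness failure."""
    failures = 0
    for stimulus in random_fuzz_inputs(n):
        try:
            system_under_test(stimulus)   # a robust SUT must not crash
        except Exception:
            failures += 1
    return failures
```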

To generate inputs outside the nominal input range, a machine-readable specification of the allowed inputs is useful. Depending on the application area and test goals, it may or may not be possible to reduce the test-suite size by limiting the generated inputs to be only slightly out of specification (SooS), i.e. close to some valid input. For example, in the context of securely implementing communication protocols, grammar-based fuzzing builds on detailed input specifications, and several approaches are used to derive inputs from them, including mutation, machine learning, and evolutionary computing [MRT4].
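
A simple way to obtain such slightly-out-of-specification inputs is mutation: start from a known valid input and change it in only a few places, so the result stays close to the specification boundary. The following sketch is hypothetical and assumes byte-string inputs.

```python
import random

def slightly_out_of_spec(valid_msg: bytes, rng: random.Random,
                         n_mutations: int = 1) -> bytes:
    """Mutate a valid input in a small number of positions ("SooS" inputs)."""
    msg = bytearray(valid_msg)
    for _ in range(n_mutations):
        choice = rng.randrange(3)
        if choice == 0 and msg:                    # flip one byte
            msg[rng.randrange(len(msg))] ^= 0xFF
        elif choice == 1:                          # insert one random byte
            msg.insert(rng.randrange(len(msg) + 1), rng.randrange(256))
        elif choice == 2 and msg:                  # truncate the tail
            del msg[rng.randrange(len(msg)):]
    return bytes(msg)

# e.g. slightly_out_of_spec(b"GET /index.html HTTP/1.1", random.Random(1))
```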

Model-Based Robustness Testing takes a (semi-)formal description of the expected system behaviour (i.e. the contract, not necessarily in the form of precondition/assumption and obligation/guarantee, but implicit in a behaviour description) together with a fault model describing how the precondition part of the contract could be violated. From this, both the inputs and the test oracles that decide whether the robustness properties and functional requirements hold can be derived, see e.g. [MRT5].
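
The following minimal sketch, loosely in the spirit of such approaches (the protocol, its states, and the fault model are all hypothetical), shows how the behaviour model can supply the test oracle while the fault model supplies the precondition violations.

```python
SPEC = {  # behaviour model: (state, input) -> (next_state, expected_output)
    ("idle", "start"): ("running", "ack"),
    ("running", "data"): ("running", "ok"),
    ("running", "stop"): ("idle", "ack"),
}

def fault_model(stimulus: str):
    """Ways the precondition can be violated for a given valid stimulus."""
    yield stimulus.upper()    # wrong casing
    yield stimulus + "!"      # trailing junk
    yield ""                  # empty input

def oracle(state: str, stimulus: str, observed_output: str) -> bool:
    """Valid inputs must match the model; invalid ones must be rejected gracefully."""
    if (state, stimulus) in SPEC:
        return observed_output == SPEC[(state, stimulus)][1]
    return observed_output == "error"   # robustness requirement: graceful rejection
```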

In many systems, but especially in cyber-physical systems, the inner state of the system can affect how an unexpected input is treated. If the system behaviour is given as a (semi-)executable behaviour model, this model can be used to drive the system under test into different states and fuzz the inputs there [MRT6].
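
Continuing the hypothetical sketch above, an executable behaviour model makes it possible to compute, for every reachable state, a valid input sequence leading there, and only then to inject an out-of-specification stimulus:

```python
def robustness_tests_per_state(spec, initial_state, fault_model):
    """Reach each model state via a valid prefix, then append one bad input."""
    prefixes = {initial_state: []}    # state -> valid input sequence reaching it
    frontier = [initial_state]
    while frontier:                   # simple reachability search over the model
        state = frontier.pop()
        for (src, stimulus), (dst, _output) in spec.items():
            if src == state and dst not in prefixes:
                prefixes[dst] = prefixes[state] + [stimulus]
                frontier.append(dst)
    for state, prefix in prefixes.items():
        for valid in {stim for (src, stim) in spec if src == state}:
            for bad in fault_model(valid):
                yield prefix + [bad]  # "go deep" first, then switch to fuzzing

# With SPEC and fault_model from the sketch above, this yields test sequences
# such as ["START"], ["start", "DATA"] or ["start", "stop!"].
```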

A different approach to Model-Based Robustness Testing for image processing applications is described in [MRT7], where, based on a definition of the input situations and of possible image processing problems, test images and possibly the related ground truth are generated.
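
As a rough illustration of that idea (not [MRT7]'s actual tooling; the object type and the injected problem here are assumptions), a synthetic test image with known ground truth can be generated and then degraded with a defined image processing problem:

```python
import numpy as np

def make_test_image(size: int = 64, noise_sigma: float = 0.2, seed: int = 0):
    """Synthesize a test image with known ground truth, then inject noise."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    top, left, side = size // 4, size // 4, size // 2
    img[top:top + side, left:left + side] = 1.0      # ground-truth object
    ground_truth = (top, left, side)                 # known location and size
    degraded = img + rng.normal(0.0, noise_sigma, img.shape)  # injected problem
    return np.clip(degraded, 0.0, 1.0), ground_truth
```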

  • Using a behaviour model, the method can use coverage-driven testing to reach interesting system states and provide out-of-specification inputs there.
  • The method depends on the availability of a behaviour model.
  • For very complex systems, it is often simply not feasible to run a robustness test suite of the size objectively needed.
  • [MRT1] ‘Standard Glossary of Software Engineering Terminology (ANSI)’, The Institute of Electrical and Electronics Engineers Inc., 1991.
  • [MRT2] R. Bloem, K. Chatterjee, K. Greimel, T. A. Henzinger, and B. Jobstmann, ‘Specification-centered robustness’, in 2011 6th IEEE International Symposium on Industrial and Embedded Systems, Vasteras, Sweden, Jun. 2011, pp. 176–185, doi: 10/cwfvw7.
  • [MRT3] A. Takanen, J. DeMott, C. Miller, and A. Kettunen, Fuzzing for software security testing and quality assurance. 2018.
  • [MRT4] H. A. Salem and J. Song, ‘A Review on Grammar-Based Fuzzing Techniques’, p. 10, 2019.
  • [MRT5] A. Savary, M. Frappier, M. Leuschel, and J.-L. Lanet, ‘Model-Based Robustness Testing in Event-B Using Mutation’, in Software Engineering and Formal Methods, vol. 9276, R. Calinescu and B. Rumpe, Eds. Cham: Springer International Publishing, 2015, pp. 132–147.
Method Dimensions
  • In-the-lab environment, Open evaluation environment, Closed evaluation environment
  • Experimental - Testing
  • Hardware, Model, Software
  • System testing, Integration testing, Unit testing, Acceptance testing
  • Thinking, Acting, Sensing
  • Functional
  • V&V process criteria