Interface fault injection

Interface fault injection, often referred to as robustness testing, consists of injecting faults at the interface of components (OS calls, APIs, services, etc.) through data corruption at the interface level [IFI1], with the purpose of evaluating the behaviour of the system or component under test in the presence of invalid inputs or stressful interface conditions [IFI2].

Interface fault injection (or robustness testing) requires that the system/component under test faces erroneous input conditions, which are usually defined based on typical developer mistakes or wrong assumptions. Erroneous input conditions can also be generated at random in some robustness testing scenarios. In a more general fault injection context, erroneous inputs injected at the interface of a given component can represent failures in preceding components that forward their erroneous outputs to the target component.
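
As an illustration, the following Python sketch defines a simple fault model: for each parameter type, a set of invalid or boundary values derived from typical developer mistakes. The type names and mutation values are illustrative assumptions only, not a standard catalogue.

    # Illustrative fault model for robustness testing: invalid and boundary
    # values per parameter type, inspired by typical developer mistakes.
    import sys

    MUTATION_RULES = {
        "string": [None, "", " ", "a" * 65536, "\0", "%s%s%s"],
        "integer": [None, 0, -1, sys.maxsize, -sys.maxsize - 1],
        "float": [None, 0.0, float("inf"), float("-inf"), float("nan")],
        "boolean": [None, "not-a-bool"],
    }

    def mutations_for(param_type):
        """Return the invalid values to inject for a parameter of a given type."""
        return MUTATION_RULES.get(param_type, [None])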

Information regarding the system interface (e.g., a WSDL document in the case of SOAP [IFI3] web services, or an OpenAPI document in the case of REST services) is normally used as input for generating the set of invalid inputs, which are combined with valid parameters and sent in requests to the system under test [IFI4]. Examples of invalid parameters are null, empty, and boundary values, strings in special formats, or even malicious values. System responses are inspected for suspicious cases of failure (e.g., the presence of exceptions in the response, or response codes referring to internal server errors) and should be analysed with regard to their severity.
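
To make this process concrete, the sketch below drives robustness tests against a hypothetical REST endpoint: each test replaces one parameter of a known-good request with an invalid value and flags responses indicating internal server errors. The endpoint, parameter names, and baseline values are assumptions for illustration, and the mutation lists are a small excerpt of the fault model sketched above.

    # Robustness test driver sketch for a hypothetical REST service.
    # Requires the third-party 'requests' package.
    import requests

    ENDPOINT = "https://api.example.com/orders"       # hypothetical endpoint
    VALID_BASELINE = {"item": "book", "quantity": 2}  # known-good request
    MUTATIONS = {                                     # per-parameter invalid values
        "item": [None, "", "a" * 65536],
        "quantity": [None, -1, 2**63],
    }

    def run_robustness_tests():
        suspicious = []
        for param, bad_values in MUTATIONS.items():
            for bad in bad_values:
                payload = dict(VALID_BASELINE)  # keep the other parameters valid
                payload[param] = bad
                resp = requests.post(ENDPOINT, json=payload, timeout=10)
                # 5xx codes suggest the service failed to reject the invalid
                # input gracefully; such cases require manual severity analysis.
                if resp.status_code >= 500:
                    suspicious.append((param, bad, resp.status_code))
        return suspicious

    if __name__ == "__main__":
        for case in run_robustness_tests():
            print("Suspicious failure:", case)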

  • Current approaches are the result of years of research; they are therefore mature and can often be applied to specific systems with ease.

  • Low effort is required for the generation of robustness test cases, due to the black-box approach and the automatic generation of test cases.

  • Ease of use and integration with current tools, since the black-box approach only requires adapting existing tools to pass valid and invalid inputs to the interface.

  • Classification of results is highly dependent on expert knowledge.

  • The quality of the workloads generated by tools often limits the disclosure of robustness problems.

  • [IFI1] N. Laranjeiro, M. Vieira and H. Madeira, "Experimental Robustness Evaluation of JMS Middleware," 2008 IEEE International Conference on Services Computing, Honolulu, HI, 2008, pp. 119-126, doi: 10.1109/SCC.2008.129.

  • [IFI2] J. Cámara, R. de Lemos, N. Laranjeiro, R. Ventura and M. Vieira, "Robustness-Driven Resilience Evaluation of Self-Adaptive Software Systems," in IEEE Transactions on Dependable and Secure Computing, vol. 14, no. 1, pp. 50-64, 1 Jan.-Feb. 2017, doi: 10.1109/TDSC.2015.2429128.

  • [IFI3] N. Laranjeiro, M. Vieira and H. Madeira, "A Robustness Testing Approach for SOAP Web Services," Journal of Internet Services and Applications, vol. 3, pp. 215-232, 2012, doi: 10.1007/s13174-012-0062-2.

  • [IFI4] N. Laranjeiro, M. Vieira and H. Madeira, "A Technique for Deploying Robust Web Services," in IEEE Transactions on Services Computing, vol. 7, no. 1, pp. 68-81, Jan.-March 2014, doi: 10.1109/TSC.2012.39.

Method Dimensions
  • In-the-lab environment
  • Experimental - Testing
  • Software
  • Operation
  • Thinking
  • Non-Functional - Other
  • V&V process criteria, SCP criteria