Runtime Verification Based on Formal Specification

Formally specify monitors to capture properties of runtime observations and verify them during the execution of the system. A special focus is given to properties that are impractical to handle with static techniques (e.g., model checking and theorem proving), either because of the size and complexity of the analysis or because precise data (typically related to non-functional properties) only becomes available at execution time.

With the complexity of today's automated systems continuously increasing, state-of-the-art exhaustive verification techniques generally take an impractical amount of time to compute. Runtime Monitoring (RM) and Runtime Verification (RV) are lightweight alternatives that make use of monitors: small executable components that observe the behaviours of a target system and determine whether those behaviours satisfy a set of associated specifications.

RM has been around for more than 30 years and has been used in several kinds of analysis of traces produced by a running system, including profiling, performance analysis, and fault detection, among other aspects relevant to the well-being of the system. The concept of a monitor is central in RM: it is the (software or hardware) computational entity that collects the data traces from the system and performs the analysis. RV is an evolution of RM in which monitors are synthesised automatically from formal specifications via specialised synthesis algorithms. Hence, the monitors adopted in RV provide guarantees beyond those of the monitors employed in RM, since the former reflect the precise meaning of a given specification. A broad overview can be found, e.g., in [RVF1-RVF4].

RV techniques typically consider formalisms such as temporal logics, state machines, and regular expressions, focusing mainly on monitor synthesis. Different flavours of temporal logic are used in RV to verify components viewed as black boxes, such as LTL [RVF5], interpreted over finite traces [RVF6], and extensions of it such as MITL [RVF7], TLTL [RVF6], STL [RVF8], and RMTLD [RVF12, RVF13].
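To make the idea of monitor synthesis concrete, the following is a minimal, hand-written sketch (in Python, with illustrative event names; it is not the output of any particular tool) of the kind of three-valued monitor that an LTL3-style synthesis procedure would produce for the finite-trace property "no error is observed before ready", i.e., (not error) U ready:

from enum import Enum

class Verdict(Enum):
    TRUE = "conclusively satisfied"
    FALSE = "conclusively violated"
    UNKNOWN = "inconclusive so far"

class UntilMonitor:
    """Three-valued monitor for the property (not error) U ready."""

    def __init__(self):
        self.verdict = Verdict.UNKNOWN

    def step(self, event: str) -> Verdict:
        # Once a conclusive verdict is reached it never changes:
        # verdicts are monotone over extensions of the observed trace.
        if self.verdict is Verdict.UNKNOWN:
            if event == "ready":
                self.verdict = Verdict.TRUE
            elif event == "error":
                self.verdict = Verdict.FALSE
        return self.verdict

# Feeding a sample trace to the monitor:
monitor = UntilMonitor()
for ev in ["tick", "tick", "ready", "error"]:
    print(ev, "->", monitor.step(ev).value)
# The verdict stays inconclusive for the first two events, becomes
# conclusively satisfied at 'ready', and is unaffected by later events.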

RV for LTL has recently been extended to also consider assumptions on the system behaviours, specified as a formal model describing the execution traces that the system is expected to follow [RVF9]. Assumptions allow monitors to be more precise, emitting a conclusive verdict earlier than the corresponding monitor without assumptions, and predictive, outputting a conclusive verdict before the decisive events are actually observed in the input trace. Typically, RV implies that monitors are orchestrated within a system's software architecture so that they can be coupled to the system, observe its execution, and identify, at runtime, aspects that were not foreseen during the design phase or errors that could not be proved absent via static verification methods [RVF11].
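As a toy illustration of this predictive effect (a sketch only, with an invented event alphabet and a hand-written assumption; it is not taken from [RVF9] or any specific tool), consider monitoring the property "eventually ack" under the assumption "every req is acknowledged within two steps". A brute-force monitor that enumerates all assumption-consistent continuations up to a bounded horizon can already give a conclusive verdict right after observing the req, before the ack itself appears:

from itertools import product

ALPHABET = {"req", "ack", "tick"}

def satisfies_eventually_ack(trace):
    # Property: some event in the (finite) trace is 'ack'.
    return "ack" in trace

def respects_assumption(trace):
    # Assumption: every 'req' is followed by an 'ack' within two steps.
    # A violation only counts if the deadline already passed inside the trace.
    for i, ev in enumerate(trace):
        if ev == "req" and "ack" not in trace[i + 1:i + 3] and len(trace) > i + 2:
            return False
    return True

def predictive_verdict(observed, horizon=3):
    # Enumerate assumption-consistent continuations of length `horizon`
    # (the horizon must be at least the assumption's deadline). If every
    # continuation satisfies the property the verdict is True; if none
    # does it is False; otherwise it stays inconclusive.
    outcomes = set()
    for cont in product(ALPHABET, repeat=horizon):
        full = list(observed) + list(cont)
        if respects_assumption(full):
            outcomes.add(satisfies_eventually_ack(full))
    if not outcomes:
        return "out-of-model"   # the observation contradicts the assumption
    if outcomes == {True}:
        return True
    if outcomes == {False}:
        return False
    return "?"

# Under the assumption, a 'req' forces an 'ack' soon, so the monitor can
# already answer True before the 'ack' is actually observed:
print(predictive_verdict(["req"]))    # True (predicted)
print(predictive_verdict(["tick"]))   # '?' (no assumption leverage yet)

In practical assumption-based RV the brute-force enumeration above would of course be replaced by symbolic reasoning over the assumption model; the sketch only illustrates the semantics of the earlier, predictive verdict.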

An important challenge in the context of RV concerns the expressiveness of the assumptions and how to use model checking techniques for infinite-state or timed systems at runtime to explore the belief state space, i.e., the set of system states consistent with the observations made so far.
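The sketch below illustrates, under strong simplifying assumptions (a small, hand-written nondeterministic finite-state model with invented state and event names), what belief-state exploration amounts to at runtime: the monitor keeps the set of model states consistent with the partial observations and reports a violation only when every consistent state is unsafe. The difficulty referred to above arises precisely when the assumed model is infinite-state or timed and this set can no longer be enumerated explicitly:

# Nondeterministic transition relation: state -> observed event -> successor states
MODEL = {
    "idle":   {"req": {"busy", "queued"}},
    "queued": {"tick": {"busy"}},
    "busy":   {"done": {"idle"}, "tick": {"busy", "failed"}},
    "failed": {"tick": {"failed"}},
}
UNSAFE = {"failed"}

def update_belief(belief, event):
    # One step of belief-state exploration: the successors, under the observed
    # event, of every state that is still considered possible.
    next_belief = set()
    for state in belief:
        next_belief |= MODEL.get(state, {}).get(event, set())
    return next_belief

def verdict(belief):
    if not belief:
        return "out-of-model"     # observations contradict the assumed model
    if belief <= UNSAFE:
        return "violation"        # every consistent state is unsafe
    if belief & UNSAFE:
        return "possibly-unsafe"  # some, but not all, consistent states are unsafe
    return "ok"

belief = {"idle"}
for ev in ["req", "tick", "tick"]:
    belief = update_belief(belief, ev)
    print(ev, sorted(belief), verdict(belief))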

  • Verification, at execution time, of properties of the target system that are hard or impossible to check at design time
  • Lightweight scalable application of formal methods
  • Rigorous specification of monitoring properties
  • The use of runtime monitoring solutions inevitably implies extra overhead in the target system
  • It is not always easy to define formal specifications, as that process depends on specialised, trained engineers and typically requires deep knowledge of the underlying theories supporting those formal specifications.
  • A black-box specification requires knowing how an internal condition (e.g., a fault) manifests itself on the observable interface of the system/component. Assumption-based RV requires a formal specification of the assumption.

[RVF1] Falcone, Y., Havelund, K., Reger, G. “A tutorial on runtime verification.” In Engineering Dependable Software Systems, vol. 34, pp. 141–175, 2013. doi: 10.3233/978-1-61499-207-3-141

[RVF2] Havelund, K., Goldberg, A. “Verify your runs.” In Proceedings of the first IFIP TC 2/WG 2.3 Conference on verified software: theories, tools, experiments (VSTTE’05), volume 4171 of LNCS. Springer, pp 374–383

[RVF3] Leucker, M., Schallhart, C. “A brief account of runtime verification.” JLAMP, 78(5):293–303, 2009

[RVF4] Bartocci, E., Falcone, Y. (eds.) “Lectures on Runtime Verification - Introductory and Advanced Topics.” Volume 10457 of Lecture Notes in Computer Science. Springer, 2018

[RVF5] Pnueli, A. “The Temporal Logic of Programs”. In FOCS, pages 46–57, 1977

[RVF6] Bauer, A., Leucker, M., Schallhart, C. “Runtime Verification for LTL and TLTL.” In ACM Transactions on Software Engineering and Methodology, 20(4):14–64, September 2011. doi: 10.1145/2000799.2000800

[RVF7] Maler, O., Nickovic, D., Pnueli, A. “From MITL to timed automata.” In Proceedings of the 4th International Conference on Formal Modelling and Analysis of Timed Systems (FORMATS’06), 2006, pp. 274–289

[RVF8] Maler, O., Nickovic, D. “Monitoring temporal properties of continuous signals.” In Formal Techniques, Modelling and Analysis of Timed and Fault-Tolerant Systems (FORMATS’04), 2004, pp. 152–166

[RVF9] Cimatti, A., Tian, C., Tonetta, S. “Assumption-Based Runtime Verification with Partial Observability and Resets.” In Runtime Verification (RV 2019), pp. 165–184. Springer. doi: 10.1007/978-3-030-32079-9_10

[RVF10] Medhat, R. et al. “Runtime Monitoring of Cyber-Physical Systems Under Timing and Memory Constraints.” In ACM Trans. Embed. Comput. Syst. Vol. 14, no. 4 (Oct. 2015), 79:1–79:29.

[RVF11] Cassar, I. et al. “A Survey of Runtime Monitoring Instrumentation Techniques.” In Electronic Proceedings in Theoretical Computer Science vol. 254 (Aug. 2017), pp. 15–28.

[RVF12] Khan, M. T., Serpanos, D., Shrobe, H. “A rigorous and efficient run-time security monitor for real-time critical embedded system applications.” In 2016 IEEE 3rd World Forum on Internet of Things (WF-IoT), 100–105. December 2016.

[RVF13] Pedro, A., Leucker, M., Pereira, D., Pinto, J.S. “Real-time MTL with durations as SMT with applications to schedulability analysis.” In TASE 2020 14th International Symposium on Theoretical Aspects of Software Engineering, December 11-13, 2020, Hangzhou, China

Method Dimensions
  • In-the-lab environment, Open evaluation environment, Closed evaluation environment
  • Experimental - Monitoring, Analytical - Formal
  • Hardware, Model, Software
  • Operation, System testing, System Design
  • Thinking, Acting, Sensing
  • Non-Functional - Safety, Functional
  • V&V process criteria, SCP criteria