Dynamic data structure verification in CBMC: TFB versus Build* and Gen&Filter. Verification and solving times in seconds; clauses and variables in thousands.
Source publication
Software model checkers are able to exhaustively explore different bounded program executions arising from various sources of non-determinism. These tools provide statements to produce non-deterministic values for certain variables, thus forcing the corresponding model checker to consider all possible values for these variables during verification. While the...
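As an illustration of such statements, the sketch below shows how non-deterministic inputs are typically introduced in a CBMC harness: a declared-but-undefined function whose name starts with `nondet_` yields an unconstrained value, and `__CPROVER_assume` restricts the values the model checker explores. This is a minimal sketch, not taken from the paper's benchmarks.

```c
#include <assert.h>

/* CBMC treats a declared but undefined function whose name starts with
 * "nondet_" as returning an arbitrary value of its return type. */
int nondet_int(void);

int main(void) {
  int x = nondet_int();                 /* x may take any int value */
  __CPROVER_assume(0 <= x && x < 10);   /* restrict the explored values */
  int y = 2 * x;
  assert(y < 20);                       /* checked for every admissible x */
  return 0;
}
```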
Contexts in source publication
Context 1
... was set for these experiments to 1 hour. Table 1 reports, for the most relevant routines of each data structure in our benchmark, the verification running times, with the underlying decision procedure's solving times reported separately, in seconds, as well as the number of clauses and variables (in thousands) in the CNF formulas corresponding to each verification task, for several scopes (S). Since we checked whether the routines preserved the corresponding structure's invariant, we did not consider for the experiments those routines that do not modify the structure, as these trivially preserve the invariant. ...
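To make these verification tasks concrete, the following simplified harness sketches the kind of invariant-preservation check described above, here for a hypothetical sorted singly linked list and in the style of an assume-based (generate-and-filter) setup. It is illustrative only and not the benchmark code used in the experiments; the scope `SCOPE`, the invariant `rep_ok`, and the routine `insert` are assumptions made for this sketch.

```c
#include <assert.h>
#include <stdlib.h>

#define SCOPE 3   /* hypothetical scope bound, playing the role of S in Table 1 */

typedef struct node { int key; struct node *next; } node_t;

int nondet_int(void);
_Bool nondet_bool(void);

/* Invariant (repOK): the list is acyclic within max_len nodes and sorted. */
static _Bool rep_ok(node_t *head, int max_len) {
  int len = 0;
  for (node_t *n = head; n != NULL; n = n->next) {
    if (++len > max_len) return 0;                          /* too long or cyclic */
    if (n->next != NULL && n->key > n->next->key) return 0; /* not sorted */
  }
  return 1;
}

/* Routine under analysis: sorted insertion. */
static node_t *insert(node_t *head, int key) {
  node_t *fresh = malloc(sizeof(node_t));
  __CPROVER_assume(fresh != NULL);
  fresh->key = key;
  if (head == NULL || key <= head->key) { fresh->next = head; return fresh; }
  node_t *cur = head;
  while (cur->next != NULL && cur->next->key < key) cur = cur->next;
  fresh->next = cur->next;
  cur->next = fresh;
  return head;
}

/* Harness: build a non-deterministic bounded input, assume the invariant
 * (the "filter" step), run the routine, and assert the invariant again. */
int main(void) {
  node_t *head = NULL;
  for (int i = 0; i < SCOPE && nondet_bool(); i++) {
    node_t *n = malloc(sizeof(node_t));
    __CPROVER_assume(n != NULL);
    n->key = nondet_int();
    n->next = head;
    head = n;
  }
  __CPROVER_assume(rep_ok(head, SCOPE));   /* keep only valid inputs */
  head = insert(head, nondet_int());
  assert(rep_ok(head, SCOPE + 1));         /* invariant preservation check */
  return 0;
}
```

Such a harness would typically be analyzed with an unwinding bound matching the scope, e.g. `cbmc harness.c --unwind 5`.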
Context 2
... remarks on the results are in order. Table 1 shows that, for all analyzed routines, the TFB approach allowed us to analyze larger scopes, at which the other input generation techniques exhausted the allotted time or memory. TFB was able to analyze larger scopes than Gen&Filter in 7 out of 12 cases (remarkably, by at least 6 in AList, at least 3 in CList, and at least 2 in AVL), and larger scopes than Build* in 8 out of 12 cases (by at least 4 in all 8 cases). ...
Citations
... Preprocessing techniques that speed up program analysis by excluding invalid or infeasible values have also been proposed in the context of symbolic model checking. For instance, bounded model checking of programs featuring dynamic data structures can be made more efficient by precomputing tight field bounds from the structures' type invariants [42]. ...
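As a rough illustration of the idea behind tight field bounds (and not the actual technique or code from [42]): with a pool of N nodes kept in a canonical order, the `next` field of node i only needs to range over the targets permitted by a precomputed bound, rather than over the whole pool. The pool size, the `bound` table, and `pick_next` below are assumptions made for this sketch; in the actual approach the bound entries would be computed from the structure's invariant rather than hard-coded.

```c
#define N 4                      /* hypothetical scope: nodes in the pool */

int nondet_index(void);

/* bound[i][j] != 0 iff node i may point to node j (column N stands for null).
 * Hard-coded here for a sorted, canonically ordered singly linked list,
 * where node i may only point to a later node or to null. */
static const _Bool bound[N][N + 1] = {
  {0, 1, 1, 1, 1},
  {0, 0, 1, 1, 1},
  {0, 0, 0, 1, 1},
  {0, 0, 0, 0, 1},
};

/* Choose a target for node i's next field, restricted to the bound. */
int pick_next(int i) {
  int j = nondet_index();
  __CPROVER_assume(0 <= j && j <= N && bound[i][j]);  /* only feasible targets */
  return j;                      /* the value N encodes null */
}
```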
Collective adaptive systems may be broadly defined as ensembles of autonomous agents whose interaction may lead to the emergence of global features and patterns. Formal verification can provide strong guarantees about the emergence of these features, but may suffer from scalability issues caused by state space explosion. Compositional verification techniques, whereby the state space of a system is generated by combining (abstractions of) those of its components, have proven to be a promising countermeasure to the state space explosion problem. Therefore, in this work we apply these techniques to the problem of verifying collective adaptive systems with stigmergic interaction. Specifically, we automatically encode these systems into networks of LNT processes, apply a static value analysis to prune the state space of individual agents, and then reuse the compositional verification procedures provided by the CADP toolbox. We demonstrate the effectiveness of our approach by verifying a collection of representative systems.
We propose a family of logical theories for capturing an abstract notion of consistency and show how to build a generic and efficient theory solver that works for all members of the family. The theories can be used to model the influence of memory consistency models on the semantics of concurrent programs. They are general enough to precisely capture important examples like TSO, POWER, ARMv8, RISC-V, RC11, IMM, and the Linux kernel memory model. To evaluate the expressiveness of our theories and the performance of our solver, we integrate them into a lazy SMT scheme that we use as a backend for a bounded model checking tool. An evaluation against related verification tools shows, besides flexibility, promising performance on challenging programs under complex memory models.
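For context, the classic store-buffering litmus test below illustrates how a memory consistency model affects program semantics: under sequential consistency the final assertion cannot fail, whereas under TSO (and weaker models) both loads may read 0, so a bounded model checker parameterized by the memory model would report different verdicts. This is a standard textbook example, not code from the cited work.

```c
#include <assert.h>
#include <pthread.h>

/* Store-buffering (SB) litmus test. */
int x = 0, y = 0;
int r0, r1;

void *thread0(void *arg) { (void)arg; x = 1; r0 = y; return NULL; }
void *thread1(void *arg) { (void)arg; y = 1; r1 = x; return NULL; }

int main(void) {
  pthread_t a, b;
  pthread_create(&a, NULL, thread0, NULL);
  pthread_create(&b, NULL, thread1, NULL);
  pthread_join(a, NULL);
  pthread_join(b, NULL);
  assert(r0 == 1 || r1 == 1);   /* holds under SC, can fail under TSO */
  return 0;
}
```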