Keywords: Neural Model Checking, Inductive Invariants, Safety Verification
Abstract: Safety verification ensures that a system avoids undesired behaviour.
Liveness complements safety, ensuring that the system also achieves its
desired objectives. A complete specification of functional correctness must
combine both safety and liveness. Proving with mathematical
certainty that a system satisfies a safety property requires exhibiting an
appropriate inductive invariant of the system, whereas proving liveness
demands a measure of progress witnessed by a ranking function.
Neural model checking has recently introduced a data-driven approach to the
formal verification of reactive systems, albeit focusing on ranking
functions and thus addressing liveness properties only. In this paper, we
extend and generalise neural model checking to encompass inductive
invariants, and thus safety properties as well. Given a system and
a linear temporal logic specification of safety and liveness, our approach
alternates between a learning component and a checking component to construct
a provably sound neural certificate. Our new method introduces a neural
certificate architecture that jointly represents inductive invariants as
proofs of safety, and ranking functions as proofs of liveness. Moreover,
our new architecture is amenable to training with constraint solvers,
accelerating training relative to prior neural model checking work based on
gradient descent. We experimentally demonstrate that our method is orders of
magnitude faster than state-of-the-art model checkers on pure-liveness
and combined safety and liveness verification tasks written in
SystemVerilog, while enabling the verification of richer properties than was
previously possible for neural model checking.
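For readers unfamiliar with the two certificate notions the abstract refers to, the standard proof obligations can be stated as follows. The notation below is ours, not taken from the paper, over a transition system with initial states Init, transition relation T, bad states Bad, and goal states Goal; the paper's exact LTL-based formulation may differ.

```latex
% Standard certificate conditions (our own notation, not the paper's).
% Inv is an inductive invariant proving safety; V is a ranking function
% proving liveness.
\begin{align*}
\mathit{Init}(s) &\Rightarrow \mathit{Inv}(s)
    && \text{(initiation)}\\
\mathit{Inv}(s) \land T(s, s') &\Rightarrow \mathit{Inv}(s')
    && \text{(consecution)}\\
\mathit{Inv}(s) &\Rightarrow \lnot\mathit{Bad}(s)
    && \text{(safety)}\\
\mathit{Inv}(s) \land \lnot\mathit{Goal}(s) \land T(s, s')
    &\Rightarrow V(s') < V(s)
    && \text{(progress, with $V$ bounded below)}
\end{align*}
```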
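The alternation of learning and checking that the abstract describes follows the general shape of a counterexample-guided loop. The sketch below is our own minimal illustration on a toy finite-state system, with every name hypothetical; the paper's method instead trains neural certificates with constraint solvers against SystemVerilog designs, which this toy does not reproduce.

```python
# Minimal sketch of a learn/check alternation for invariant inference,
# on a toy finite-state system. Our own illustration; not the paper's
# architecture, learner, or checker.

INIT = {0}
BAD = {5}                     # safety property: never reach state 5

def step(s):
    # Transition relation (deterministic here): a 3-bit counter
    # incrementing by 2, so it cycles 0, 2, 4, 6 and never hits 5.
    return (s + 2) % 8

def learn_candidate(samples):
    # "Learner": the weakest candidate consistent with the data seen so
    # far, i.e. the set of sampled states. A neural learner generalises
    # from samples instead of memorising them; a set suffices for a toy.
    return set(samples)

def check_certificate(inv):
    # "Checker": discharge the three invariant conditions exhaustively
    # (an SMT solver plays this role for symbolic systems). Returns a
    # counterexample state, or None if the candidate is verified.
    for s in INIT:
        if s not in inv:
            return s          # initiation fails
    for s in inv:
        if s in BAD:
            return s          # safety fails: genuine violation if s is
                              # reachable; omitted handling for brevity
        if step(s) not in inv:
            return step(s)    # consecution fails
    return None

samples = set(INIT)
while True:
    candidate = learn_candidate(samples)
    cex = check_certificate(candidate)
    if cex is None:
        print("inductive invariant found:", sorted(candidate))
        break
    samples.add(cex)          # refine the dataset and repeat
```

The liveness side follows the same loop shape, with the checker discharging the ranking condition (V decreases on every non-goal transition) instead of the invariant conditions.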
Supplementary Material: zip
Primary Area: Machine learning for sciences (e.g. climate, health, life sciences, physics, social sciences)
Submission Number: 23428