Keywords: Neurosymbolic Program Analysis, Logic-guided Learning
TL;DR: We train neural networks to act as abstract transformers via supervised and unsupervised learning, automatically producing efficient, differentiable transformers for abstract interpretation and enabling learning-driven program analysis.
Abstract: Abstract interpretation is a widely used method for the formal analysis and verification of programs and neural networks. However, designing efficient abstract transformers for expressive, widely used domains such as Octagon and Polyhedra is challenging: one must carefully balance the fundamental trade-off between the cost, soundness, and precision of a transformer for downstream tasks. Further, scalable implementations involve intricate performance optimizations such as Octagon and Polyhedra decomposition. Given the inherent complexity of abstract transformers and the proven capability of neural networks to approximate complex functions, we envision and propose $\textit{Neural Abstract Transformers}$: neural networks that serve as abstract transformers. The proposed Neural Abstract Interpretation ($\texttt{NAI}$) framework provides supervised and unsupervised methods to learn efficient neural transformers $\textit{automatically}$, reducing development cost. We instantiate the $\texttt{NAI}$ framework for two widely used numerical domains: Interval and Octagon. Evaluations on these domains demonstrate the effectiveness of the $\texttt{NAI}$ framework in learning sound and precise neural transformers. An added advantage of our technique is that neural transformers are differentiable, unlike their hand-crafted counterparts. As an example, we show how this differentiability allows framing invariant generation as a learning problem, enabling neural transformers to generate valid octagonal invariants for a program.
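To make the Interval-domain instantiation concrete, the sketch below (an illustrative assumption, not the paper's code) shows hand-crafted interval abstract transformers for addition and multiplication, and how one could sample the supervised (input, output) pairs on which a neural surrogate transformer would be regressed:

```python
# Minimal sketch of the idea behind neural abstract transformers, using the
# Interval domain. A hand-crafted transformer maps input intervals to a sound
# output interval; a neural surrogate would be trained on sampled pairs.
import random

def interval_add(a, b):
    # Hand-crafted abstract transformer for x + y on intervals:
    # [l1, u1] + [l2, u2] = [l1 + l2, u1 + u2]
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    # Transformer for x * y: bounds come from the four corner products.
    corners = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(corners), max(corners))

def make_dataset(n, seed=0):
    # Supervised training data for a neural transformer: feature vector
    # (l1, u1, l2, u2) -> target interval from the exact transformer.
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        a = tuple(sorted(rng.uniform(-10, 10) for _ in range(2)))
        b = tuple(sorted(rng.uniform(-10, 10) for _ in range(2)))
        data.append(((*a, *b), interval_mul(a, b)))
    return data
```

A network regressed on such pairs is differentiable end-to-end, which is what allows downstream tasks like invariant generation to be framed as gradient-based learning problems.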
Submission Number: 33