Safe Neurosymbolic Learning with Differentiable Symbolic Execution

29 Sept 2021 (edited 15 Mar 2022) · ICLR 2022 Poster
  • Keywords: Verified Learning, Neurosymbolic Programs, Safe Learning, Symbolic Execution
  • Abstract: We study the problem of learning verifiably safe parameters for programs that use neural networks as well as symbolic, human-written code. Such neurosymbolic programs arise in many safety-critical domains. However, because they need not be differentiable, it is hard to learn their parameters using existing gradient-based approaches to safe learning. Our method, Differentiable Symbolic Execution (DSE), samples control flow paths in a program, symbolically constructs a worst-case "safety loss" along these paths, and backpropagates the gradients of these losses through program operations using a generalization of the REINFORCE estimator. We evaluate the method on a mix of synthetic tasks and real-world benchmarks. Our experiments show that DSE significantly outperforms the state-of-the-art DiffAI method on these tasks.
  • One-sentence Summary: We present DSE, the first approach to worst-case-safe parameter learning for potentially non-differentiable neurosymbolic programs, which bridges symbolic execution and stochastic gradient estimation to learn losses over safety properties.
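
The abstract's core idea, sampling control flow paths and backpropagating a safety loss through non-differentiable branches via a REINFORCE-style score-function estimator, can be illustrated with a toy sketch. Everything here (the single parameter `theta`, the two-branch program, the loss values) is an illustrative assumption, not the paper's actual formulation:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def safety_loss(path):
    # Hypothetical worst-case "safety loss" assigned to each sampled path.
    return 1.0 if path == "unsafe_branch" else 0.1

def estimate_gradient(theta, n_samples=20000, seed=0):
    """Score-function (REINFORCE) estimate of d/dtheta E[safety_loss].

    The branch taken is a discrete, non-differentiable choice, so we
    differentiate the *log-probability* of the sampled path instead of
    the path itself -- the trick the abstract generalizes to programs.
    """
    rng = random.Random(seed)
    p = sigmoid(theta)  # probability the program takes the unsafe branch
    grad = 0.0
    for _ in range(n_samples):
        take_unsafe = rng.random() < p
        path = "unsafe_branch" if take_unsafe else "safe_branch"
        loss = safety_loss(path)
        # d/dtheta log P(path) for a sigmoid-parameterized Bernoulli choice:
        score = (1.0 - p) if take_unsafe else -p
        grad += loss * score
    return grad / n_samples

theta = 0.0
p = sigmoid(theta)
# Closed-form gradient of p*1.0 + (1-p)*0.1 for comparison.
analytic = (1.0 - 0.1) * p * (1.0 - p)
estimated = estimate_gradient(theta)
```

The estimated gradient converges to the analytic value as the sample count grows; in a real neurosymbolic program the per-path loss would itself be constructed symbolically and differentiated through the continuous program operations.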