Keywords: neuro-symbolic, constraints, weighted model integration, tractable inference
TL;DR: We propose a probabilistic neurosymbolic layer that encodes a probability distribution over non-convex algebraic constraints. It can be seamlessly plugged into any neural predictor and scales efficiently thanks to GPU-accelerated symbolic integration.
Abstract: In safety-critical applications, guaranteeing the satisfaction of constraints over continuous environments is crucial, e.g., an autonomous agent should never crash into obstacles or go off-road. Neural models struggle in the presence of these constraints, especially when they involve intricate algebraic relationships. To address this, we introduce a differentiable probabilistic layer that guarantees the satisfaction of non-convex algebraic constraints over continuous variables. This probabilistic algebraic layer (PAL) can be seamlessly plugged into any neural architecture and trained via maximum likelihood without requiring approximations. PAL defines a distribution over conjunctions and disjunctions of linear inequalities, parametrized by polynomials. This formulation enables efficient and exact renormalization via symbolic integration, which can be amortized across different data points and easily parallelized on a GPU. We showcase PAL and our integration scheme on a number of benchmarks for algebraic constraint integration and on real-world trajectory data.
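To make the renormalization idea concrete, below is a minimal 1D sketch of the principle the abstract describes: a polynomial weight restricted to a disjunction of linear-inequality cells, normalized exactly by symbolic integration. The polynomial, the interval cells, and the `density` helper are illustrative assumptions, not the authors' implementation; see the linked repository for the actual GPU-accelerated, differentiable layer.

```python
import sympy as sp

x = sp.symbols('x')
w = x**2 + 1                  # polynomial parametrizing the unnormalized density
cells = [(0, 1), (2, 3)]      # disjunction of linear inequalities: x in [0,1] or [2,3]

# Exact renormalization: integrate the polynomial symbolically over each cell.
# Each per-cell integral is closed-form, so Z is an exact rational (26/3 here).
Z = sum(sp.integrate(w, (x, a, b)) for a, b in cells)

def density(v):
    """Exact density: polynomial weight inside the feasible region, 0 outside."""
    if any(a <= v <= b for a, b in cells):
        return float(w.subs(x, v) / Z)
    return 0.0

print(Z)                                          # 26/3
print(density(0.5), density(1.5), density(2.5))   # nonzero, 0.0, nonzero
```

In the paper's setting the polynomial coefficients would be produced by a neural predictor and the per-cell integrals batched on a GPU; the exact, closed-form partition function is what makes maximum-likelihood training possible without approximations.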
Latex Source Code: zip
Code Link: https://github.com/april-tools/pal
Signed PMLR Licence Agreement: pdf
Readers: auai.org/UAI/2025/Conference, auai.org/UAI/2025/Conference/Area_Chairs, auai.org/UAI/2025/Conference/Reviewers, auai.org/UAI/2025/Conference/Submission486/Authors, auai.org/UAI/2025/Conference/Submission486/Reproducibility_Reviewers
Submission Number: 486