ENFORCE: Nonlinear Constrained Learning with Adaptive-depth Neural Projection

ICLR 2026 Conference Submission 24866 Authors

Published: 20 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Constrained learning, Hard-constrained neural networks, Proxy optimization, Trustworthy AI, Physics-informed machine learning
TL;DR: ENFORCE is a neural network architecture that uses an adaptive projection module (AdaNP) to enforce nonlinear equality constraints in predictions, improving safety, accuracy, and efficiency in optimization and regression tasks.
Abstract: Ensuring that neural networks adhere to domain-specific constraints is crucial for addressing safety and ethical concerns, and it can also improve inference accuracy. Despite the nonlinear nature of most real-world tasks, existing methods are predominantly limited to affine or convex constraints. We introduce ENFORCE, a neural network architecture that uses an adaptive projection module (AdaNP) to enforce nonlinear equality constraints on its predictions. We mathematically prove that our projection mapping is 1-Lipschitz under mild assumptions, making it well-suited for stable training. We evaluate ENFORCE on multiple tasks, including function fitting, a real-world engineering simulation, and learning optimization problems. For the latter, we introduce a class of scalable optimization problems as a benchmark for nonlinear constrained learning. The predictions of our new architecture satisfy $N_C$ equality constraints that are nonlinear in both the inputs and outputs of the neural network, while remaining tractable, with a computational complexity of $\mathcal{O}(N_C^3)$ at training and inference time.
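For intuition, projecting a raw prediction onto a set of nonlinear equality constraints $\{y : c(x, y) = 0\}$ can be sketched as a linearized (Gauss-Newton style) correction: linearize the constraints around the current output and apply the minimum-norm update, which requires solving an $N_C \times N_C$ system and hence costs $\mathcal{O}(N_C^3)$ per step. The snippet below is a minimal, assumption-laden illustration of this general idea, not the paper's AdaNP module; the function `project_onto_constraints`, its `constraint_fn` argument, and the fixed step count are hypothetical choices made for the example.

```python
import torch

def project_onto_constraints(y, x, constraint_fn, num_steps=1):
    """Illustrative sketch (not the paper's AdaNP): push a prediction y
    toward {y : c(x, y) = 0} with linearized minimum-norm corrections.

    Each step solves an N_C x N_C linear system, giving O(N_C^3) cost
    per step, consistent with the complexity quoted in the abstract.
    """
    for _ in range(num_steps):
        c = constraint_fn(x, y)                       # (N_C,) constraint residuals
        J = torch.autograd.functional.jacobian(       # (N_C, dim_y) Jacobian wrt y
            lambda z: constraint_fn(x, z), y,
            create_graph=y.requires_grad)             # keep graph if training end-to-end
        # Minimum-norm correction: dy = -J^T (J J^T)^{-1} c
        lam = torch.linalg.solve(J @ J.T, c)          # O(N_C^3) solve
        y = y - J.T @ lam
    return y

# Example usage: enforce the single nonlinear constraint y0 * y1 - x = 0.
x = torch.tensor(2.0)
y_raw = torch.tensor([1.0, 1.5])                      # raw network output (hypothetical)
c_fn = lambda x, y: (y[0] * y[1] - x).unsqueeze(0)
y_proj = project_onto_constraints(y_raw, x, c_fn, num_steps=5)
```

With a handful of steps the residual $c(x, y)$ shrinks rapidly near the constraint manifold; an adaptive-depth module would instead choose the number of correction steps based on the remaining constraint violation.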
Supplementary Material: zip
Primary Area: optimization
Submission Number: 24866