Shaping Monotonic Neural Networks with Constrained Learning

ICLR 2026 Conference Submission 11794 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: learning algorithm design, monotonic neural network, machine learning
Abstract: Monotonicity of a neural network's outputs with respect to a subset of its inputs is a desirable property: it supports the interpretability, fairness, and generalizability of the learned model and underlies many applications in finance, physics, engineering, and other domains. In this paper, we propose a novel, flexible, and adaptive learning framework that induces monotonicity in neural networks with general architectures. Monotonicity is imposed as a constraint during training, which motivates a primal-dual learning algorithm for fitting the model. In particular, by introducing a chance constraint, our framework provides an interface for trading off the probability of monotonicity satisfaction against overall prediction performance, making it adaptable to different application scenarios. The proposed algorithm requires only modest extra computation to continuously and adaptively enforce monotonicity until the constraint is satisfied. Unlike existing methods for building in monotonicity, our framework imposes no restrictions on the network architecture and needs no pre-processing such as regularization tuning. Numerical experiments on a range of practical tasks show that our method achieves performance competitive with state-of-the-art methods.
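To make the primal-dual idea in the abstract concrete, the sketch below shows one plausible instantiation: a task loss is minimized while a non-negative dual variable adaptively weights a hinge penalty on negative input gradients over the monotone features, with the dual variable increased whenever the average violation exceeds a small slack. This is a minimal illustration under our own assumptions (the penalty form, the toy data, and all hyperparameters such as `eps` and `eta_dual` are hypothetical), not the authors' exact algorithm or chance-constraint formulation.

```python
# Hypothetical sketch of primal-dual training with a monotonicity constraint.
# Not the paper's method; an illustration of the general scheme only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data: the target is increasing in feature 0.
X = torch.rand(256, 2)
y = (2.0 * X[:, 0] + 0.3 * torch.sin(6.0 * X[:, 1])).unsqueeze(1)

model = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

monotone_idx = [0]    # features the output must be non-decreasing in
eps = 1e-3            # slack on the average violation (assumed value)
lam = torch.zeros(1)  # dual variable, kept non-negative
eta_dual = 0.5        # dual ascent step size (assumed value)

for step in range(500):
    x = X.clone().requires_grad_(True)
    pred = model(x)
    task_loss = nn.functional.mse_loss(pred, y)

    # Input gradients of the outputs with respect to the inputs.
    grads = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]
    # Average monotonicity violation: negative partial derivatives
    # on the constrained features, hinged at zero.
    violation = torch.relu(-grads[:, monotone_idx]).mean()

    # Primal step: descend on the Lagrangian.
    lagrangian = task_loss + lam.detach() * violation
    opt.zero_grad()
    lagrangian.backward()
    opt.step()

    # Dual step: ascend on the constraint slack, projected onto lam >= 0.
    lam = torch.clamp(lam + eta_dual * (violation.detach() - eps), min=0.0)

print(f"MSE={task_loss.item():.4f}, violation={violation.item():.6f}, lambda={lam.item():.3f}")
```

As the constraint becomes satisfied (average violation at or below `eps`), the dual updates stop growing `lam`, so the penalty is enforced only as strongly as needed; this matches the abstract's description of continuous, adaptive enforcement at modest extra cost (one additional gradient computation per step).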
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 11794