A Novel Architecture for Integrating Shape Constraints in Neural Networks

ICLR 2026 Conference Submission 17391 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · License: CC BY 4.0
Keywords: Shape constraint, Convexity, Monotonicity, Neural network, Regularization
Abstract: This research proposes COMONet (Convex-Concave and Monotonicity-Constrained Neural Networks), a novel neural network architecture designed to embed inductive biases as shape constraints—specifically, monotonicity, convexity, concavity, and their combinations—into neural network training. Unlike previous models, which address only a subset of these constraints, COMONet comprehensively integrates and enforces eight distinct shape constraints: monotonic increasing, monotonic decreasing, convex, concave, convex increasing, convex decreasing, concave increasing, and concave decreasing. This integration is achieved through a unique partially connected structure, wherein inputs are grouped and selectively connected to specialized neural units employing either exponentiated or unconstrained weights, combined with appropriate activation functions. Depending on the shape constraint required by each input, COMONet dynamically utilizes its full architecture or a partial configuration, providing significant flexibility. We further provide theoretical guarantees ensuring the strict enforcement of these constraints, while demonstrating that COMONet achieves performance comparable to existing benchmark methods. Moreover, our numerical experiments confirm that COMONet remains robust even under noisy conditions. Together, these results underscore COMONet's potential to advance constrained neural network training as a practical and theoretically grounded approach.
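To illustrate the mechanism the abstract describes—exponentiated weights paired with a suitable activation—the following is a minimal NumPy sketch of one such constrained unit. All function and variable names here are hypothetical illustrations, not the authors' implementation: exponentiating the raw weights makes every effective weight positive, which yields a monotonically increasing output; using a convex, non-decreasing activation (softplus) additionally makes the output convex in its input.

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: convex and non-decreasing, so composing
    # it with nonnegative weights preserves monotonicity and convexity.
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0)

def convex_increasing_forward(x, W1, b1, w2, b2):
    """Hypothetical sketch of a COMONet-style convex-increasing unit.

    exp(W1) and exp(w2) are strictly positive, so the map x -> output is
    monotonically increasing; because softplus is convex and increasing,
    the nonnegative combination of its outputs is also convex in x.
    """
    h = softplus(x @ np.exp(W1) + b1)   # positive weights + convex activation
    return h @ np.exp(w2) + b2          # positive output weights keep both properties

# Random (unconstrained) raw parameters; constraints hold for ANY values.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 8)), rng.normal(size=8)
w2, b2 = rng.normal(size=(8, 1)), 0.0

xs = np.linspace(-3.0, 3.0, 101).reshape(-1, 1)
ys = convex_increasing_forward(xs, W1, b1, w2, b2).ravel()

# Empirical checks on a grid: first differences positive (increasing),
# second differences nonnegative up to float tolerance (convex).
assert np.all(np.diff(ys) > 0)
assert np.all(np.diff(ys, 2) > -1e-8)
```

The other seven constraint types described in the abstract would presumably follow by sign flips of inputs/outputs and by swapping the activation (e.g., a concave increasing one), though the exact construction is specified in the paper itself.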
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 17391