Constrained Monotonic Neural Networks

16 May 2022 (modified: 12 Mar 2024) · NeurIPS 2022 Submission
Keywords: Neural Network, Monotonicity, Deep Learning
TL;DR: We propose a simple and elegant method to enforce monotonicity in neural networks.
Abstract: Deep neural networks are becoming increasingly popular for approximating arbitrary functions from noisy data. But wider adoption is hindered by the need to explain such models and to impose additional constraints on them. The monotonicity constraint is one of the most requested properties in real-world scenarios and is the focus of this paper. One of the oldest ways to construct a monotonic fully connected neural network is to constrain its weights to be non-negative while employing a monotonic activation function. Unfortunately, this construction does not work with popular non-saturating activation functions such as ReLU, ELU, and SELU, as it can only approximate convex functions. We show that this shortcoming can be fixed by employing the original activation function for part of the neurons in a layer and its point reflection for the rest. Our experiments show that this approach to building monotonic deep neural networks matches or exceeds the accuracy of other state-of-the-art methods such as deep lattice networks or monotonic networks obtained by heuristic regularization. The method is the simplest in the sense of having the fewest parameters and requiring no modifications to the learning procedure or post-learning steps. Finally, we prove that it can approximate any continuous monotone function on a compact subset of $\mathbb{R}^n$.
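
Below is a minimal, hypothetical NumPy sketch of the construction described in the abstract, not the authors' implementation: weights are made non-negative so each affine map is monotone, and the layer applies the original activation $\rho(x)$ to one half of the units and its point reflection $-\rho(-x)$ to the other half, so stacked layers can express both convex and concave monotone behaviour. The function names, the 50/50 split, and the choice of ReLU are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a monotone dense layer.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def monotone_dense(x, W, b, rho=relu):
    """x: (batch, d_in); W: (d_in, d_out) unconstrained; b: (d_out,)."""
    z = x @ np.abs(W) + b            # non-negative weights keep the map monotone
    half = z.shape[1] // 2
    convex  = rho(z[:, :half])       # original activation: convex pieces
    concave = -rho(-z[:, half:])     # point reflection: concave pieces
    return np.concatenate([convex, concave], axis=1)

# Usage: two monotone layers followed by a monotone linear readout.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
h = monotone_dense(x, W1, b1)        # (4, 8), monotone in every input
y = h @ np.abs(W2) + b2              # (4, 1)
```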
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2205.11775/code)