On the Depth of Monotone ReLU Neural Networks and ICNNs

Published: 08 May 2025 · Last Modified: 15 May 2025 · OpenReview Archive Direct Upload · CC BY 4.0
Abstract:

We study two models of ReLU neural networks: monotone networks (ReLU^+) and input convex neural networks (ICNNs). Our focus is on expressivity, mostly in terms of depth, and we prove the following lower bounds. For the maximum function MAX_n, which computes the maximum of n real numbers, we show that ReLU^+ networks cannot compute MAX_n, or even approximate it. We prove a sharp lower bound of n on the ICNN depth complexity of MAX_n. We also prove depth separations between ReLU networks and ICNNs: for every k, there is a depth-2 ReLU network of size O(k^2) that cannot be simulated by any depth-k ICNN. The proofs are based on deep connections between neural networks and polyhedral geometry, and also use isoperimetric properties of triangulations.
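For intuition only (this sketch is not from the paper), the two-variable case already shows how general ReLU networks compute the maximum: the standard depth-2 identity max(x, y) = y + ReLU(x - y) uses a hidden unit with a negative weight on y. Assuming the usual convention that monotone (ReLU^+) networks restrict all weights to be nonnegative, such a subtraction is unavailable to them, which hints at why computing MAX_n is hard in that model.

```python
def relu(t: float) -> float:
    return max(t, 0.0)

# Depth-2 ReLU network computing MAX_2 exactly via max(x, y) = y + ReLU(x - y).
# The hidden unit takes the input (x - y), i.e. it applies a negative weight
# to y; under the (assumed) convention that ReLU^+ networks have only
# nonnegative weights, this construction is not monotone.
def max2(x: float, y: float) -> float:
    return y + relu(x - y)

assert max2(3.0, 5.0) == 5.0
assert max2(2.0, -1.0) == 2.0
```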
