Keywords: ReLU neural networks, computational complexity, parameterized complexity, verification, computational geometry
Abstract: Neural networks with ReLU activation play a key role in modern machine learning. Understanding the functions represented by ReLU networks is a major topic in current research, as it enables better interpretability of learning processes.
Injectivity plays a crucial role whenever invertibility of a neural network is required, for example in inverse problems or generative models. The exact computational complexity of deciding injectivity was recently posed as an open problem (Puthawala et al. [JMLR 2022]).
We answer this question by proving coNP-completeness. On the positive side, we show that the problem for a single ReLU layer remains tractable for small input dimension; more precisely, we present a parameterized algorithm which yields fixed-parameter tractability with respect to the input dimension.
In addition, we study the network verification problem, which is of great importance since neural networks are increasingly used in safety-critical systems. We prove that network verification is coNP-hard for a general class of input domains. Our result thus highlights that the hardness of network verification is intrinsic to the ReLU networks themselves rather than to specific input domains. In this context, we also characterize surjectivity for ReLU networks with one-dimensional output, which turns out to be the complement of a basic network verification task. We reveal interesting connections to computational convexity by formulating the surjectivity problem as a zonotope containment problem.
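To give a flavor of the zonotope viewpoint, a standard building block is point-in-zonotope membership, which reduces to a linear feasibility problem. The sketch below is a minimal illustration of that reduction, assuming a zonotope given by a center and a generator matrix; it is not the containment formulation studied in the submission.

```python
# Minimal sketch (illustrative only): point-in-zonotope membership as an LP
# feasibility check. A zonotope is Z = {center + G @ lam : lam in [-1, 1]^m},
# and a point p lies in Z iff some lam in [-1, 1]^m satisfies G @ lam = p - center.
# This is a simplified building block, not the paper's surjectivity formulation.
import numpy as np
from scipy.optimize import linprog

def point_in_zonotope(p, center, G):
    """Return True if p belongs to the zonotope with the given center and generators G."""
    m = G.shape[1]
    res = linprog(
        c=np.zeros(m),             # feasibility only: minimize the zero objective
        A_eq=G, b_eq=p - center,   # enforce G @ lam = p - center
        bounds=[(-1.0, 1.0)] * m,  # lam in [-1, 1]^m
        method="highs",
    )
    return res.status == 0         # status 0: a feasible (optimal) point was found

# Example: the image of a cube under a linear map is a zonotope.
G = np.array([[1.0, 0.5], [0.0, 1.0]])
center = np.zeros(2)
print(point_in_zonotope(np.array([0.2, 0.3]), center, G))  # True
print(point_in_zonotope(np.array([3.0, 0.0]), center, G))  # False
```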
Primary Area: learning theory
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11257