The Polytopal Complex as a Framework to Analyze Multilayer ReLU Networks

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: theory of deep learning, MLP, low dimension, polytopal complex
TL;DR: We capture the local properties of an MLP, such as continuity, number of cells, and extrema, by computing its polytopal cell complex.
Abstract: Neural networks have shown superior performance in many different domains. However, a precise understanding of what even simple architectures are actually doing has not yet been achieved, hindering the application of such architectures in safety-critical embedded systems. To improve this understanding, we view a network as a continuous piecewise linear function. The network decomposes the input space into cells on which it is an affine function; the resulting cells form a polytopal complex. In this paper we provide an algorithm to derive this complex. Furthermore, we capture the local and global behavior of the network by computing the maxima, minima, number of cells, local span, and curvature of the complex. With the machinery presented in this paper we can extend the validity of a neural network beyond the finite discrete test set to an open neighborhood of this test set, potentially covering large parts of the input domain. To show the effectiveness of the proposed method we run various experiments on the effects of width, depth, regularization, and initial seed on these measures. We empirically confirm that the solution found by training is strongly influenced by weight initialization. We further find that under regularization, fewer cells capture more of the volume, while the total number of cells stays in the same range. Together, these findings provide novel insights into the network and its training parameters.
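The core observation behind the abstract, that a ReLU network is affine on each cell of its polytopal complex and that a cell is identified by an activation pattern, can be sketched in a few lines. The toy two-layer MLP below is purely illustrative (its weights are arbitrary, not from the paper); it shows that once the ReLU gates are frozen to the pattern of a given input, the resulting affine map reproduces the network exactly on that input's cell.

```python
import numpy as np

# Hypothetical toy ReLU MLP for illustration only (weights are random,
# not taken from the paper). A ReLU network is affine on each cell of
# its polytopal complex; the cell containing an input x is identified
# by the activation pattern of the hidden units at x.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((5, 2)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((1, 5)), rng.standard_normal(1)

def forward(x):
    """Standard forward pass of the two-layer ReLU network."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def affine_on_cell(x):
    """Return (A, c) with network(y) = A @ y + c for all y in x's cell."""
    pattern = (W1 @ x + b1 > 0).astype(float)  # activation pattern = cell id
    D = np.diag(pattern)                       # freeze the ReLU gates
    A = W2 @ D @ W1
    c = W2 @ D @ b1 + b2
    return A, c

x = np.array([0.3, -0.7])
A, c = affine_on_cell(x)
# On x's cell, the frozen-gate affine map agrees with the network.
assert np.allclose(forward(x), A @ x + c)
```

Enumerating all realized activation patterns over the input domain yields the full polytopal complex; the sketch above only derives the affine piece for a single cell.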
Primary Area: interpretability and explainable AI
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10614
