Learning on a Razor’s Edge: Identifiability and Singularity of Polynomial Neural Networks

ICLR 2026 Conference Submission 20337 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: identifiability, singularities, critical points, neuromanifolds, polynomial activation, algebraic geometry
TL;DR: We discuss identifiability of MLPs and CNNs with a generic polynomial activation, and relate the singularities of their neuromanifolds to subnetworks and sparsity bias.
Abstract: We study function spaces parametrized by neural networks, referred to as neuromanifolds. Specifically, we focus on deep Multi-Layer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs) whose activation function is a sufficiently generic polynomial. First, we address the identifiability problem, showing that, for almost all functions in the neuromanifold of an MLP, only finitely many parameter choices yield that function. For CNNs, the parametrization is generically one-to-one. As a consequence, we compute the dimension of the neuromanifold. Second, we describe singular points of neuromanifolds. We characterize singularities completely for CNNs and partially for MLPs. In both cases, singularities arise from sparse subnetworks. For MLPs, we prove that these singularities often correspond to critical points of the mean-squared error loss, a correspondence that fails for CNNs. This provides a geometric explanation of the sparsity bias of MLPs. All of our results leverage tools from algebraic geometry.
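As a minimal sketch of the setup described in the abstract (the notation $\Phi$, $W_i$, $\sigma$ is ours, chosen for illustration, and we assume the standard bias-free composition): an MLP with weight matrices $W_1, \dots, W_L$ and an entrywise polynomial activation $\sigma$ induces the parametrization map

$$\Phi(W_1, \dots, W_L) = W_L \circ \sigma \circ W_{L-1} \circ \cdots \circ \sigma \circ W_1,$$

whose image $\mathcal{M} = \operatorname{im} \Phi$ is the neuromanifold. Identifiability then concerns the fibers of $\Phi$: if $\Phi^{-1}(f)$ is finite for almost all $f \in \mathcal{M}$ (the MLP result above), the fiber dimension theorem gives $\dim \mathcal{M} = \dim(\text{parameter space})$, which is one way the stated dimension computation can follow.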
Primary Area: learning theory
Submission Number: 20337