Minimal Filling Architectures of Polynomial Neural Networks: Counterexamples, Frontier Search, and Defects

07 May 2026 (modified: 09 May 2026) · ICML 2026 Workshop CoLoRAI Submission · CC BY 4.0
Keywords: Neuroalgebraic Geometry, Polynomial Neural Networks, Deep Polynomial Networks, Neurovarieties, Expressivity, Minimal Filling Architectures
TL;DR: For feedforward polynomial neural networks, we disprove the minimal unimodal conjecture of Kileel, Trager, and Bruna by using a frontier search to find a minimal filling architecture with non‑unimodal widths and large defects.
Abstract: We provide a counterexample to the minimal unimodal conjecture for polynomial neural networks (PNNs) with power activation functions. Fixing the input and output widths, the conjecture states that any minimal filling architecture has unimodal hidden-layer widths. We find a counterexample via a frontier search and certify it using recursive dimension bounds and symbolic computation. Notably, several subarchitectures of this example exhibit large defect, in contrast with the predominantly small-defect behavior observed in prior examples.
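As a point of reference for the objects studied here, the following is a minimal sketch of how a feedforward PNN with power activation is typically parametrized (following the setup of Kileel, Trager, and Bruna); the assumption that the power activation is applied coordinatewise after every layer except the last, and the function name `pnn`, are illustrative choices, not code from the paper.

```python
import numpy as np

def pnn(weights, r, x):
    """Evaluate a feedforward polynomial neural network.

    weights : list of weight matrices [W_1, ..., W_L] defining the
              architecture widths (d_0, d_1, ..., d_L).
    r       : exponent of the coordinatewise power activation t -> t**r,
              applied after every layer except the final linear layer.
    x       : input vector of length d_0.
    """
    for W in weights[:-1]:
        x = (W @ x) ** r          # linear map followed by power activation
    return weights[-1] @ x        # final layer is purely linear

# Example: architecture (2, 2, 1) with squaring activation (r = 2).
weights = [np.eye(2), np.array([[1.0, 1.0]])]
out = pnn(weights, 2, np.array([2.0, 3.0]))  # computes 2**2 + 3**2
```

Varying the weight matrices sweeps out the neurovariety of the architecture; an architecture is "filling" when this variety equals the full ambient space of output polynomials, and the defect measures how far the variety's dimension falls short of the naive parameter count.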
Submission Number: 97