Combinatorial Capacity of modReLU Complex Networks: VC-Dimension Bounds and Lower Limits

TMLR Paper 6629 Authors

24 Nov 2025 (modified: 26 Nov 2025) · Under review for TMLR · CC BY 4.0
Abstract: Complex-valued neural networks (CVNNs) are increasingly used in settings where both the magnitude and the phase of the signal carry information. In particular, deep networks with the modReLU activation function have become standard in applications such as MRI reconstruction, radar, and complex-valued time-series modeling. While the approximation properties of such networks have recently been analyzed in detail, their statistical capacity in the sense of VC-dimension has not, to the best of our knowledge, been studied. In this paper we formalize a natural class of fully connected deep complex-valued networks with modReLU activation and real sign output, and view them as binary classifiers on $\mathbb{R}^{2d}$ via the usual realification. Using tools from real algebraic geometry and a VC-dimension bound for semi-algebraic concept classes due to Goldberg and Jerrum, together with quantitative bounds for quantifier elimination, we prove that for any architecture with $W$ real parameters and depth $L$, the VC-dimension of the corresponding hypothesis class is at most on the order of $W^2 \log W$, with a universal constant independent of the particular architecture. On the other hand, by restricting to real inputs and parameters and exploiting results of Harvey, Liaw, and Mehrabian and of Bartlett et al. on deep networks with piecewise-linear activations, we obtain lower bounds of order $WL \log(W/L)$ for suitable depth-$L$ architectures within the modReLU class. Thus the VC-dimension of these networks grows at least linearly in both $W$ and $L$, and at most quadratically in $W$, up to logarithmic factors. Closing this gap is an interesting open problem.
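To make the hypothesis class concrete, below is a minimal NumPy sketch (not code from the paper) of a modReLU network with real sign output, using the standard definition $\mathrm{modReLU}(z) = \mathrm{ReLU}(|z| + b)\, z/|z|$, extended by $0$ at $z = 0$, with a real bias $b$ per neuron. All function and variable names, as well as the toy layer sizes, are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

def modrelu(z, b, eps=1e-12):
    """Standard modReLU activation: ReLU(|z| + b) * z / |z|, set to 0 at z = 0.
    `z` is a complex vector, `b` a real bias vector of the same shape."""
    mag = np.abs(z)
    return np.maximum(mag + b, 0.0) * z / np.maximum(mag, eps)

def modrelu_sign_classifier(x, weights, biases, v):
    """Sketch of the realified binary classifier: a real input x in R^{2d}
    is identified with a point of C^d, passed through modReLU layers with
    complex weight matrices `weights` and real modReLU biases `biases`,
    and labeled by the sign of the real part of a complex linear readout `v`."""
    d = x.shape[0] // 2
    z = x[:d] + 1j * x[d:]                    # realification: R^{2d} ~ C^d
    for W, b in zip(weights, biases):
        z = modrelu(W @ z, b)                 # complex linear map + modReLU
    return 1 if (v @ z).real >= 0 else -1     # real sign output

# Toy usage with hypothetical sizes: inputs in R^6 (i.e., C^3), two hidden layers.
rng = np.random.default_rng(0)
dims = [3, 8, 8]
weights = [rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
           for n, m in zip(dims[:-1], dims[1:])]
biases = [rng.standard_normal(m) for m in dims[1:]]
v = rng.standard_normal(dims[-1]) + 1j * rng.standard_normal(dims[-1])
print(modrelu_sign_classifier(rng.standard_normal(6), weights, biases, v))
```

In this sketch, $W$ would count the real and imaginary parts of all complex weights plus the real modReLU biases, matching the parameter count in the abstract's bounds.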
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Moshe_Eliasof1
Submission Number: 6629