Keywords: Universal Approximation Theory, Rational Activation Functions, Scientific Machine Learning, Function Approximation, Symbolic Reasoning
Abstract: We study a shallow variant of XNet, a neural architecture whose activation
functions are derived from the Cauchy integral formula. While prior work
focused on deep variants, we show that even a single-layer XNet exhibits
near-exponential approximation rates, surpassing the polynomial rates of
MLPs and spline-based networks such as Kolmogorov–Arnold Networks (KANs).
Empirically, XNet reduces approximation error by over 600× on discontinuous
functions, achieves up to 20,000× lower residuals on physics-informed PDE
benchmarks, and improves policy accuracy and sample efficiency in PPO-based
reinforcement learning, all while maintaining computational efficiency
comparable to or better than that of KAN baselines.
These results demonstrate that expressive approximation can stem from
principled activation design rather than depth alone, offering a compact,
theoretically grounded alternative for function approximation, scientific
computing, and control.
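
For intuition: the Cauchy integral formula f(z) = (1/2πi) ∮ f(ζ)/(ζ − z) dζ, once the contour is discretized, expresses f as a sum of simple rational terms Σ_k λ_k/(x − z_k), which is the idea behind Cauchy-kernel activations. Below is a minimal sketch of a single-layer approximator built from such rational features; it is not the authors' implementation, and the pole grid, pole width, step target, and least-squares fit are all illustrative assumptions.

```python
# Minimal sketch: a single layer of Cauchy-kernel features fitted to a
# discontinuous target. Assumptions (not from the paper): poles fixed on
# a grid, a shared imaginary part, and linear least-squares output weights.
import numpy as np

def cauchy_features(x, poles_re, pole_im):
    """Real and imaginary parts of 1/(x - z_k) for poles z_k = a_k + i*d."""
    dx = x[:, None] - poles_re[None, :]            # shape (n_samples, n_poles)
    denom = dx**2 + pole_im**2
    return np.concatenate([dx / denom, pole_im / denom], axis=1)

x_train = np.linspace(-1.0, 1.0, 400)              # grid avoids x = 0 exactly
y_train = np.sign(x_train)                         # discontinuous step target

poles_re = np.linspace(-1.2, 1.2, 32)              # hypothetical pole grid
pole_im = 0.05                                     # hypothetical pole width d

Phi = cauchy_features(x_train, poles_re, pole_im)  # (400, 64) feature matrix
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)  # fit output weights

x_test = np.linspace(-0.99, 0.99, 1000)
y_hat = cauchy_features(x_test, poles_re, pole_im) @ w
away = np.abs(x_test) > 0.1                        # exclude the jump region
print("max abs error away from the jump:",
      np.max(np.abs(y_hat - np.sign(x_test))[away]))
```

In a full XNet the pole parameters would themselves be trained by gradient descent; fixing them on a grid here reduces the fit to linear least squares purely for illustration.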
Supplementary Material: zip
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 26358