Keywords: machine precision, precision, interpolation, quasi-interpolation, scientific machine learning, approximation theory, numerical analysis
TL;DR: We show how to construct and train MLPs that achieve machine-precision interpolation with geometric convergence rates using quasi-interpolation theory.
Abstract: Neural networks often plateau far above machine precision, limiting their use in scientific computing pipelines. A central question is whether this reflects an expressivity limit or a failure of optimization. In the interpolation setting, we show that optimization is the primary bottleneck by constructing the first explicit MLP interpolant that provably achieves machine-precision accuracy with $\log(1/\varepsilon)$ parameter scaling while remaining implementable in floating-point arithmetic. Our construction, based on quasi-interpolation theory, exposes a dimensionless bandwidth parameter $\lambda$ that controls the tradeoff between approximation error and numerical stability. Comparing this construction to trained MLPs, we find that optimization drives $\lambda \to 0$, causing the network to collapse to an overly narrow length-scale regime and utilize capacity redundantly, even though the quasi-interpolant itself remains reasonably conditioned. These results provide a principled lens on precision failures in scientific machine learning.
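To make the bandwidth tradeoff described above concrete, here is a minimal sketch of a classical one-dimensional Gaussian quasi-interpolant on a uniform grid, in the spirit of Maz'ya and Schmidt's approximate approximations; it is an illustration, not the paper's construction (the abstract does not specify the kernel or grid). The dimensionless width `lam` (kernel width measured in grid spacings) is assumed here to play a role analogous to the paper's $\lambda$: too small a value imposes a saturation-error floor far above machine precision, while a moderately larger value pushes that floor toward it.

```python
import numpy as np

def gaussian_quasi_interpolant(f, h, lam, x):
    """Classical Gaussian quasi-interpolant on a uniform grid (a sketch, not
    the paper's construction):

        Qf(x) = (1 / (lam * sqrt(pi))) * sum_j f(j*h) * exp(-((x - j*h) / (lam*h))**2)

    `lam` is a dimensionless bandwidth: the kernel width in units of the grid
    spacing h. It is assumed to be analogous to the paper's lambda parameter.
    """
    x = np.atleast_1d(np.asarray(x, dtype=float))
    # Truncate the lattice sum to nodes whose Gaussian weight exceeds machine eps.
    radius = lam * h * np.sqrt(-np.log(np.finfo(float).eps))
    lo = int(np.floor((x.min() - radius) / h))
    hi = int(np.ceil((x.max() + radius) / h))
    nodes = h * np.arange(lo, hi + 1)
    # Weighted sum of shifted Gaussians, normalised so constants are reproduced
    # up to an exponentially small (in lam**2) saturation error.
    weights = np.exp(-((x[:, None] - nodes[None, :]) / (lam * h)) ** 2)
    return weights @ f(nodes) / (lam * np.sqrt(np.pi))

if __name__ == "__main__":
    f = np.sin
    x = np.linspace(0.25, 0.75, 1001)
    for lam in (0.5, 1.0, 2.0):          # bandwidth in units of the grid spacing
        for h in (1e-1, 1e-2, 1e-3):
            err = np.max(np.abs(gaussian_quasi_interpolant(f, h, lam, x) - f(x)))
            print(f"lam={lam:>4}  h={h:.0e}  max error = {err:.2e}")
```

In this toy setting, `lam = 0.5` pins the error near 1e-1 regardless of `h` (the saturation floor), whereas `lam = 2` lets the error keep decreasing as `h` shrinks, since its saturation floor sits near machine precision; this mirrors the accuracy loss the abstract attributes to optimization driving $\lambda \to 0$.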
Journal Opt In: No, I do not wish to participate
Submission Number: 43