Constructing Machine-Precision Neural Networks with Quasi-Interpolants

Published: 03 Mar 2026 · Last Modified: 03 Mar 2026 · ICLR 2026 Workshop FM4Science Poster · CC BY 4.0
Keywords: machine precision, precision, interpolation, quasi-interpolation, scientific machine learning, approximation theory, numerical analysis
TL;DR: We show how to construct and train MLPs that achieve machine-precision interpolation with geometric convergence rates using quasi-interpolation theory.
Abstract: Neural networks struggle to train to machine precision even for simple interpolation tasks, limiting their use in scientific computing pipelines. We address this by providing the first explicit MLP construction that provably achieves machine-precision interpolation with $\log(1/\varepsilon)$ parameter scaling --- matching classical polynomial methods --- while remaining implementable in floating-point arithmetic. Our construction, based on quasi-interpolation theory, reveals a critical bandwidth parameter $\lambda$ that controls a tradeoff between aliasing error and conditioning; at the optimal $\lambda$, weight magnitudes must grow with network width. Applying this framework to trained MLPs, we find that they fail to maintain the required weight scaling and exhibit rank saturation. Our results provide a principled framework for understanding why optimization, not expressivity, underlies precision failures in scientific machine learning.
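To make the quasi-interpolation idea concrete, the following is a minimal NumPy sketch of a classical Gaussian quasi-interpolant on a uniform grid (in the spirit of Maz'ya–Schmidt approximate approximations), not the paper's MLP construction. The variable `lam` plays the role of the bandwidth $\lambda$ from the abstract: increasing it shrinks the aliasing (saturation) error, which decays roughly like $e^{-\pi^2\lambda}$, at the cost of wider kernels and worse conditioning. All function names, constants, and the test function here are illustrative assumptions.

```python
import numpy as np

def gaussian_quasi_interpolant(f, h, x, lam, pad=1.0):
    """Classical Gaussian quasi-interpolant on a uniform grid of spacing h:

        Q_h f(x) = (pi*lam)**(-1/2) * sum_j f(j*h) * exp(-(x - j*h)**2 / (lam*h**2))

    Larger lam shrinks the saturation (aliasing) error, which decays roughly
    like exp(-pi**2 * lam), but widens each kernel relative to the grid.
    """
    # pad the node range so truncating the infinite sum does not
    # pollute the evaluation interval
    nodes = np.arange(np.min(x) - pad, np.max(x) + pad + h, h)
    d2 = (x[:, None] - nodes[None, :]) ** 2        # pairwise squared distances
    return (np.exp(-d2 / (lam * h ** 2)) / np.sqrt(np.pi * lam)) @ f(nodes)

# Demo: for small lam the error stalls at a lam-dependent saturation floor
# no matter how fine the grid; larger lam pushes that floor toward zero.
f = lambda t: np.sin(2 * np.pi * t)
x = np.linspace(0.0, 1.0, 513)
for lam in (0.5, 1.0, 2.0):
    errs = [np.max(np.abs(gaussian_quasi_interpolant(f, h, x, lam) - f(x)))
            for h in (0.02, 0.01, 0.005, 0.0025)]
    print(f"lam={lam}: " + "  ".join(f"{e:.1e}" for e in errs))
```

In this demo, the $\lambda = 0.5$ errors stall at a saturation floor regardless of grid refinement, while larger $\lambda$ keeps converging. Reaching precision $\varepsilon$ thus needs $\lambda \sim \log(1/\varepsilon)$, which is consistent with the abstract's claim that the optimal bandwidth forces weight magnitudes to grow.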
Submission Number: 113