Projected Neural Additive Models as Universal Approximators

ICLR 2026 Conference Submission 21550 Authors

19 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: Neural additive models, Projected feature spaces, Universal approximators, Sparse representations, Symbolic regression
TL;DR: We prove that the projected neural additive model is a universal approximator, elucidating its ability to approximate any continuous function on a closed and bounded domain.
Abstract: This article proves that any continuous multi-variable function can be approximated arbitrarily closely by a linear combination of single-variable functions of the inputs in a projected space. Using a set of independent neural networks to parameterize these feature functions of the projected inputs, we introduce their linear combination as the projected neural additive model (PNAM): an extension of the neural additive model (NAM) (cf. Agarwal et al. (2021)) that now enables universal approximation. While the couplings of the input variables bestow the PNAM with the universal approximation property, they can diminish the interpretability intrinsic to the NAM. As such, we propose regularization and post hoc techniques to promote sparse solutions and enhance the interpretability of the PNAM. The single-variable characteristic of the bases also enables us to convert them into symbolic equations and dramatically reduces the number of required parameters. We provide results from numerical experiments on invariants in knot theory and on phase-field fracture of brittle solids to illustrate the expressivity and interpretability of the PNAM.
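
The abstract describes the PNAM as a sum of single-variable networks applied to learned linear projections of the input, f(x) ≈ Σ_k g_k(w_kᵀx). Below is a minimal sketch of that architecture, assuming a PyTorch-style implementation; the class name `PNAM`, the layer sizes, and the L1 penalty used to promote sparse projections are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class PNAM(nn.Module):
    """Hypothetical sketch: f(x) = sum_k g_k(w_k^T x), where each g_k
    is an independent single-variable MLP acting on one projected
    coordinate, per the abstract's description."""

    def __init__(self, in_dim: int, n_projections: int, hidden: int = 32):
        super().__init__()
        # Learnable projection W mapping x in R^in_dim to n_projections scalars.
        self.proj = nn.Linear(in_dim, n_projections, bias=False)
        # One independent single-variable feature network g_k per projection.
        self.feature_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_projections)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.proj(x)  # shape: (batch, n_projections)
        # Linear combination (plain sum) of the single-variable bases.
        out = sum(net(z[:, k:k + 1]) for k, net in enumerate(self.feature_nets))
        return out.squeeze(-1)

# Usage: an L1 penalty on the projection weights is one way to encourage
# the sparse, more interpretable solutions the abstract mentions.
model = PNAM(in_dim=8, n_projections=16)
x = torch.randn(4, 8)
y_hat = model(x)                    # shape: (4,)
l1 = model.proj.weight.abs().sum()  # sparsity penalty on the projections
loss = ((y_hat - torch.randn(4)) ** 2).mean() + 1e-3 * l1
```

Because each g_k depends on a single scalar, its learned curve can be plotted or fitted with a symbolic expression after training, which is the route to symbolic equations the abstract alludes to.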
Supplementary Material: zip
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 21550