NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning

Published: 28 Jan 2022, Last Modified: 04 May 2025
ICLR 2022 Spotlight
Readers: Everyone
Keywords: Generalized Additive Model, Deep Learning Architecture, Interpretability
Abstract: Deployment of machine learning models in real, high-risk settings (e.g., healthcare) often depends not only on the model's accuracy but also on its fairness, robustness, and interpretability. Generalized Additive Models (GAMs) are a class of interpretable models with a long history of use in these high-risk domains, but they lack desirable features of deep learning such as differentiability and scalability. In this work, we propose a neural GAM (NODE-GAM) and neural GA$^2$M (NODE-GA$^2$M) that scale well and perform better than other GAMs on large datasets, while remaining interpretable compared to other ensemble and deep learning models. We demonstrate that our models find interesting patterns in the data. Lastly, we show that we can improve model accuracy via self-supervised pre-training, an improvement that is not possible for non-differentiable GAMs.
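For background (standard GAM/GA$^2$M definitions, not taken from this page; the notation $f_j$, $f_{jk}$ is generic rather than the paper's): a GAM restricts the predictor to a sum of per-feature shape functions, while a GA$^2$M additionally allows pairwise interaction terms, which is what keeps both model classes interpretable.

```latex
% Standard GAM and GA^2M functional forms (background sketch; link function g,
% bias \beta_0, and shape functions f_j, f_{jk} follow the usual convention).
\begin{align}
  g\!\left(\mathbb{E}[y \mid \mathbf{x}]\right)
    &= \beta_0 + \sum_{j} f_j(x_j)
    && \text{(GAM: one shape function per feature)} \\
  g\!\left(\mathbb{E}[y \mid \mathbf{x}]\right)
    &= \beta_0 + \sum_{j} f_j(x_j) + \sum_{j < k} f_{jk}(x_j, x_k)
    && \text{(GA$^2$M: adds pairwise interaction terms)}
\end{align}
```

Because each $f_j$ depends on a single feature (and each $f_{jk}$ on a pair), the learned functions can be plotted and inspected directly; per the abstract, NODE-GAM and NODE-GA$^2$M make these components differentiable so they can be trained at scale and benefit from self-supervised pre-training.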
One-sentence Summary: We develop deep-learning versions of the Generalized Additive Model (GAM) and GA$^2$M that are accurate, scalable, and interpretable.
Supplementary Material: zip
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/node-gam-neural-generalized-additive-model/code)