NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning

29 Sept 2021, 00:34 (edited 16 Mar 2022) · ICLR 2022 Spotlight
  • Keywords: Generalized Additive Model, Deep Learning Architecture, Interpretability
  • Abstract: Deployment of machine learning models in real-world high-risk settings (e.g. healthcare) often depends not only on the model's accuracy but also on its fairness, robustness, and interpretability. Generalized Additive Models (GAMs) are a class of interpretable models with a long history of use in these high-risk domains, but they lack desirable features of deep learning such as differentiability and scalability. In this work, we propose a neural GAM (NODE-GAM) and a neural GA$^2$M (NODE-GA$^2$M) that scale well and perform better than other GAMs on large datasets, while remaining interpretable compared to other ensemble and deep learning models. We demonstrate that our models find interesting patterns in the data. Lastly, we show that we can improve model accuracy via self-supervised pre-training, an improvement that is not possible for non-differentiable GAMs.
  • One-sentence Summary: We develop deep-learning versions of Generalized Additive Models (GAMs) and GA$^2$Ms that are accurate, scalable, and interpretable.
  • Supplementary Material: zip
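For context, the GAM and GA$^2$M model classes the abstract refers to take the standard textbook forms below (these are the usual definitions, not equations copied from the paper; NODE-GAM parameterizes the shape functions with differentiable neural oblivious decision ensembles):

```latex
% Standard GAM: a link function g applied to the expected target,
% expressed as a sum of per-feature shape functions f_i.
g\big(\mathbb{E}[y]\big) = \beta_0 + \sum_{i} f_i(x_i)

% GA^2M additionally includes pairwise interaction terms f_{ij},
% which remain interpretable as 2-D heatmaps.
g\big(\mathbb{E}[y]\big) = \beta_0 + \sum_{i} f_i(x_i)
  + \sum_{i < j} f_{ij}(x_i, x_j)
```

Because each $f_i$ depends on a single feature (and each $f_{ij}$ on a pair), the learned functions can be plotted directly, which is the interpretability property the abstract contrasts with full ensemble and deep models.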