Abstract: Generalized Additive Models (GAM) and Neural Additive Models (NAM) have gained attention for addressing the trade-off between accuracy and interpretability of machine learning models. Yet, these models underperform when the data contains multiple subpopulations with distinct relationships between features and outputs. The main reason for this limitation is that these models collapse multiple relationships because they are forced to fit the data in a unimodal fashion. Here we propose a Mixture of Neural Additive Models (MNAM) to overcome this limitation. The proposed MNAM learns relationships between features and outputs in a multimodal fashion and assigns a probability to each mode. For a given subpopulation, MNAM activates one or more matching modes by increasing their probabilities. Thus, the objective of MNAM is to learn multiple relationships and activate the right ones by automatically identifying the subpopulations of interest. Just as GAM and NAM have fixed relationships between features and outputs, MNAM maintains interpretability by having multiple fixed relationships. We demonstrate how the proposed MNAM balances rich representations against interpretability through numerous empirical observations and pedagogical studies.
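For intuition, below is a hypothetical minimal PyTorch sketch of an MNAM-style architecture as described in the abstract: per-feature subnetworks each produce one contribution per mode, and a gating head assigns a probability to each mode. All names (FeatureNet, MNAM, the gating layout, and the hyperparameters) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical minimal sketch of a Mixture of Neural Additive Models (MNAM).
# Class and variable names are illustrative, not taken from the paper's code.
import torch
import torch.nn as nn


class FeatureNet(nn.Module):
    """Small MLP mapping a single scalar feature to one contribution per mode."""

    def __init__(self, num_modes, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_modes),
        )

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)  # (batch, num_modes)


class MNAM(nn.Module):
    """Additive model with K modes and a gating head producing mode probabilities."""

    def __init__(self, num_features, num_modes, hidden=32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            FeatureNet(num_modes, hidden) for _ in range(num_features)
        )
        self.bias = nn.Parameter(torch.zeros(num_modes))
        # Gating network: assigns a probability to each mode given the full input.
        self.gate = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_modes),
        )

    def forward(self, x):  # x: (batch, num_features)
        # Sum per-feature contributions, keeping one prediction per mode.
        contribs = [f(x[:, i : i + 1]) for i, f in enumerate(self.feature_nets)]
        mode_preds = torch.stack(contribs, dim=0).sum(dim=0) + self.bias  # (batch, K)
        mode_probs = torch.softmax(self.gate(x), dim=-1)                  # (batch, K)
        return mode_preds, mode_probs


model = MNAM(num_features=5, num_modes=3)
preds, probs = model(torch.randn(8, 5))
print(preds.shape, probs.shape)  # torch.Size([8, 3]) torch.Size([8, 3])
```

In this sketch, interpretability follows from the additive structure: each feature's shape functions (one per mode) can be plotted directly, while the gating probabilities indicate which mode is active for a given input.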
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: 1. Edited abstract and introduction to emphasize the novelty of the paper.
2. Edited Section 2 to clarify the motivations for parts of the method and added Algorithm 1 to clarify how the model makes predictions.
3. Updated Section 2.2 to compare two different training algorithms.
4. Updated Section 3.1.3 and Table 1 to include results for the probabilistic Neural Additive Model.
5. Updated Section 3.1.4 and added Figure 5 to show one more use case for MNAM.
6. Updated Section 4 and Section 5 to address the reviewer's comments.
7. Moved the soft thresholding section into the main text of Section 2.
Assigned Action Editor: ~Yingzhen_Li1
Submission Number: 1018