Sparse Deep Additive Model with Interactions: Enhancing Interpretability and Predictability

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: interpretable deep learning, feature selection, high-dimensional data analysis
TL;DR: The Sparse Deep Additive Model with Interactions (SDAMI) combines sparsity-driven feature selection with deep subnetworks to disentangle main and interaction effects, improving both interpretability and predictive accuracy.
Abstract: Recent advances in deep learning highlight the need for personalized models that can learn from small or moderate samples, handle high-dimensional features, and remain interpretable. To address this challenge, we propose the Sparse Deep Additive Model with Interactions (SDAMI), a framework that combines sparsity-driven feature selection with deep subnetworks for flexible function approximation. Unlike conventional deep learning models, which often function as black boxes, SDAMI explicitly disentangles main effects and interaction effects to enhance interpretability. At the same time, its deep additive structure achieves higher predictive accuracy than classical additive models. Central to SDAMI is the concept of an Effect Footprint, which assumes that higher-order interactions project marginally onto main effects. Leveraging this principle, SDAMI employs a three-stage strategy to circumvent the search complexity inherent in direct interaction screening: first, identify strong main effects that implicitly carry information about important interactions; second, exploit this information—through structured regularization such as group lasso—to distinguish genuine main effects from interaction effects; third, build subnetworks for the identified main effects and interactions. For each selected main effect, SDAMI constructs a dedicated subnetwork, enabling nonlinear function approximation while preserving interpretability and providing a structured foundation for modeling interactions. Extensive simulations confirm SDAMI's ability to recover effect structures across diverse scenarios, and comparative applications in reliability analysis, neuroscience, and medical diagnostics further demonstrate its versatility in addressing real-world high-dimensional modeling challenges.
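The abstract's second stage relies on structured regularization such as group lasso to screen main effects. As a minimal sketch of that screening idea (not the authors' actual algorithm, whose details are not given here), the following NumPy example runs proximal gradient descent with block soft-thresholding on a linear model; all function names, hyperparameters, and the synthetic data are illustrative assumptions:

```python
import numpy as np

def group_soft_threshold(v, t):
    # Block soft-thresholding: shrink the whole coefficient group,
    # zeroing it out entirely when its norm falls below t.
    norm = np.linalg.norm(v)
    if norm <= t:
        return np.zeros_like(v)
    return (1.0 - t / norm) * v

def group_lasso_screen(X, y, groups, lam=0.1, lr=0.01, n_iter=1000):
    # Proximal gradient descent for group-lasso linear regression.
    # groups: list of index arrays, one per candidate main effect.
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n          # least-squares gradient
        beta = beta - lr * grad                  # gradient step
        for g in groups:                         # proximal step per group
            beta[g] = group_soft_threshold(beta[g], lr * lam)
    return beta

# Synthetic example: only features 0 and 2 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=200)
groups = [np.array([j]) for j in range(5)]       # one group per feature here
beta = group_lasso_screen(X, y, groups)
selected = [j for j, g in enumerate(groups) if np.linalg.norm(beta[g]) > 1e-6]
```

In the SDAMI pipeline as described, each feature surviving such a screen would then receive its own subnetwork; here the group structure is trivial (one coefficient per feature), whereas a basis expansion per feature would make the grouping non-trivial.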
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 21420