Curve your Enthusiasm: Concurvity Regularization in Differentiable Generalized Additive Models

Published: 20 Jun 2023, Last Modified: 19 Jul 2023
Venue: IMLH 2023 Poster
Keywords: Interpretable Machine Learning, Generalized Additive Models, Concurvity, Multicollinearity, Regularization, Time-Series Forecasting, Interpretability
TL;DR: We address concurvity, the non-linear equivalent of multicollinearity, in Generalized Additive Models (GAMs) by proposing a novel regularizer that effectively reduces concurvity, enhancing interpretability without compromising prediction quality.
Abstract: Generalized Additive Models (GAMs) have recently experienced a resurgence in popularity, particularly in high-stakes domains such as healthcare. GAMs are favored due to their interpretability, which arises from expressing the target value as a sum of non-linear functions of the predictors. Despite the current enthusiasm for GAMs, their susceptibility to concurvity, i.e., (possibly non-linear) dependencies between the predictors, has hitherto been largely overlooked. Here, we demonstrate how concurvity can severely impair the interpretability of GAMs and propose a remedy: a conceptually simple, yet effective regularizer that penalizes pairwise correlations of the non-linearly transformed feature variables. This procedure is applicable to any gradient-based fitting of differentiable additive models, such as Neural Additive Models or NeuralProphet, and enhances interpretability by eliminating ambiguities due to self-canceling feature contributions. We validate the effectiveness of our regularizer in experiments on synthetic as well as real-world datasets for time-series and tabular data. Our experiments show that concurvity in GAMs can be reduced without significantly compromising prediction quality, improving interpretability and reducing variance in the feature importances.
Submission Number: 54
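
To make the abstract's description more concrete, below is a minimal PyTorch sketch of a penalty on pairwise correlations between the non-linearly transformed feature contributions of a differentiable GAM. This is an illustration, not the authors' code: the function name `concurvity_penalty`, the use of the mean absolute Pearson correlation as the aggregation, and the weighting coefficient `lambda_` are assumptions; the paper may define the penalty differently.

```python
import torch


def concurvity_penalty(contributions: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Penalize pairwise correlations between feature contributions.

    contributions: (batch_size, num_features) tensor whose columns hold the
    per-feature shape-function outputs f_i(x_i) of the additive model.
    Returns the mean absolute Pearson correlation over distinct column pairs
    (an assumed aggregation; other choices, e.g. the maximum, are possible).
    """
    n, d = contributions.shape
    # Standardize each column over the batch.
    centered = contributions - contributions.mean(dim=0, keepdim=True)
    std = centered.std(dim=0, unbiased=False, keepdim=True).clamp_min(eps)
    z = centered / std
    # (d, d) Pearson correlation matrix between feature contributions.
    corr = (z.T @ z) / n
    # Average |correlation| over off-diagonal entries only.
    off_diag = ~torch.eye(d, dtype=torch.bool, device=corr.device)
    return corr[off_diag].abs().mean()


# Hypothetical usage during gradient-based fitting of a NAM-style model:
# `model` is assumed to return both the prediction and the per-feature
# contributions; lambda_ trades prediction quality against concurvity.
# pred, contributions = model(x)
# loss = torch.nn.functional.mse_loss(pred, y) + lambda_ * concurvity_penalty(contributions)
```

Because the penalty is differentiable in the contributions, it can simply be added to the training loss of any gradient-based additive model, which matches the abstract's claim of applicability to Neural Additive Models or NeuralProphet.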