Abstract: We review generalized additive models as a type of ‘transparent’ model that has recently seen renewed interest in the deep learning community as _neural additive models_. We highlight multiple types of nonidentifiability in this model class and discuss challenges in interpretability, arguing for restraint in claiming that such models are ‘interpretable’ or ‘suitable for safety-critical applications’.