A Group Variable Importance Framework for Bayesian Neural Networks

TMLR Paper 857 Authors

12 Feb 2023 (modified: 05 Jun 2023) · Rejected by TMLR
Abstract: While the success of neural networks has been well-established across a variety of domains, our ability to interpret these methods remains limited. Traditional variable importance approaches in machine learning address this issue by providing local explanations about particular predictive decisions; that is, they detail how important any given feature is to the classification of a particular sample in the dataset. However, such univariate mapping approaches have been shown, across many applications in the literature, to generate false positives and false negatives in high-dimensional and collinear data settings. In this paper, we focus on the slightly different task of global interpretability, where our goal is to identify important groups of variables by aggregating over collections of univariate signals to improve power and mitigate false discovery. In the context of neural networks, a feature is rarely important on its own, so our strategy is specifically designed to leverage partial covariance structures and incorporate variable interactions into our proposed group feature ranking. Here, we extend the recently proposed "RelATive cEntrality" (RATE) measure to the Bayesian deep learning setting; we refer to this approach as the "GroupRATE" criterion. Given a trained network, GroupRATE applies an information-theoretic metric to the joint posterior distribution of effect sizes to assess the group-level significance of features. Importantly, unlike competing approaches, our method does not require tuning parameters, which can be costly and difficult to select. We demonstrate the utility of our framework on both simulated and real data.
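The abstract does not spell out the computation, but a minimal sketch of the general idea may help: assuming a Gaussian approximation to the joint posterior of effect sizes, KL divergence as the information-theoretic metric, and conditioning a group's effect sizes to zero as the notion of group-level relevance, one could score groups from posterior draws as below. The function name `group_rate` and its inputs `beta_samples` and `groups` are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of a KL-divergence group-centrality score from posterior
# effect-size samples; not the paper's actual code.
import numpy as np

def group_rate(beta_samples, groups, jitter=1e-6):
    """Rank non-overlapping groups of features by a KL-divergence centrality.

    beta_samples : (S, p) array of posterior draws of effect sizes.
    groups       : list of index lists, one per group.
    Returns an array of scores normalized to sum to 1; larger = more central.
    """
    mu = beta_samples.mean(axis=0)                  # posterior mean of effect sizes
    Sigma = np.cov(beta_samples, rowvar=False)      # posterior covariance
    Sigma += jitter * np.eye(Sigma.shape[0])        # numerical stability
    p = mu.shape[0]
    klds = []
    for g in groups:
        g = np.asarray(g)
        rest = np.setdiff1d(np.arange(p), g)
        S_gg = Sigma[np.ix_(g, g)]
        S_rg = Sigma[np.ix_(rest, g)]
        S_rr = Sigma[np.ix_(rest, rest)]
        # Distribution of the remaining effect sizes after setting beta_g = 0.
        A = S_rg @ np.linalg.inv(S_gg)              # regression coefficients
        m_shift = A @ mu[g]                         # mean shift from conditioning
        S_cond = S_rr - A @ S_rg.T                  # conditional covariance
        # Closed-form KL( N(mu_rest - m_shift, S_cond) || N(mu_rest, S_rr) ).
        S_rr_inv = np.linalg.inv(S_rr)
        k = rest.size
        trace_term = np.trace(S_rr_inv @ S_cond)
        quad_term = m_shift @ S_rr_inv @ m_shift
        _, logdet_rr = np.linalg.slogdet(S_rr)
        _, logdet_cond = np.linalg.slogdet(S_cond + jitter * np.eye(k))
        kld = 0.5 * (trace_term + quad_term - k + logdet_rr - logdet_cond)
        klds.append(max(kld, 0.0))
    klds = np.array(klds)
    return klds / klds.sum()                        # normalize so scores sum to 1
```

Because the scores are normalized to sum to one, a natural convention (borrowed from the original RATE framing, and only an assumption here) is to flag groups whose score exceeds the uniform level 1/(number of groups), which requires no additional tuning parameters and is consistent with the abstract's claim.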
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Tongliang_Liu1
Submission Number: 857