Gaussian Process Regression With Interpretable Sample-Wise Feature Weights

Published: 01 Jan 2023 · Last Modified: 28 Sept 2024 · IEEE Trans. Neural Networks Learn. Syst. 2023 · License: CC BY-SA 4.0
Abstract: Gaussian process regression (GPR) is a fundamental model in machine learning (ML). Owing to its accurate predictions with uncertainty estimates and its versatility in handling various data structures via kernels, GPR has been successfully applied in a wide range of applications. However, in GPR, the contribution of each input feature to a prediction cannot be interpreted. Here, we propose GPR with local explanation, which reveals the feature contributions to the prediction of each sample while maintaining the predictive performance of GPR. In the proposed model, both the prediction and the explanation for each sample are produced by an easy-to-interpret locally linear model, whose weight vector is assumed to be generated from multivariate Gaussian process priors. The hyperparameters of the proposed model are estimated by maximizing the marginal likelihood. For a new test sample, the proposed model predicts the target value and the weight vector, together with their uncertainties, in closed form. Experimental results on various benchmark datasets verify that the proposed model achieves predictive performance comparable to that of GPR and superior to that of existing interpretable models, while providing higher interpretability than those models, both quantitatively and qualitatively.
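The abstract describes predictions of the form y(x) = x^T w(x), where each component of the weight function w(x) follows a Gaussian process prior. Under the simplifying assumption of independent weight GPs sharing one RBF kernel, the induced covariance between outputs is Cov(y_i, y_j) = k(x_i, x_j) (x_i · x_j), and both the prediction and the posterior mean of the sample-wise weights are available in closed form. The following is a minimal NumPy sketch of that construction (function names, the shared-kernel assumption, and hyperparameter values are illustrative, not the paper's actual implementation, which also estimates hyperparameters by maximizing the marginal likelihood):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel between rows of A (M, D) and B (N, D).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def fit_predict(X, y, X_new, ls=1.0, noise=0.1):
    """Posterior means of sample-wise weights w(x*) and predictions y(x*).

    Assumed model sketch: y_i = x_i^T w(x_i) + eps, each weight
    function an independent GP with a shared RBF kernel, so that
    Cov(y_i, y_j) = k(x_i, x_j) * (x_i . x_j).
    """
    # Gram matrix of the outputs plus observation noise.
    K = rbf(X, X, ls) * (X @ X.T) + noise**2 * np.eye(len(X))
    alpha = np.linalg.solve(K, y)              # K^{-1} y
    k_star = rbf(X_new, X, ls)                 # (M, N)
    # Cross-covariance Cov(w_d(x*), y_i) = k(x*, x_i) * x_{i,d},
    # so E[w(x*) | y] = sum_i k(x*, x_i) x_i alpha_i.
    W = (k_star[:, :, None] * X[None, :, :]
         * alpha[None, :, None]).sum(axis=1)   # (M, D) weight means
    y_pred = (W * X_new).sum(axis=1)           # y* = x*^T E[w(x*)]
    return W, y_pred
```

The returned `W` plays the role of the per-sample explanation: row m gives the estimated linear feature contributions at test point m, while `y_pred` coincides with the standard GP posterior mean under the induced kernel k(x_i, x_j)(x_i · x_j).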