PCA Subspaces Are Not Always Optimal for Bayesian Learning

Published: 02 Dec 2021, Last Modified: 05 May 2023. NeurIPS 2021 Workshop DistShift Poster.
Keywords: Bayesian Inference, Subspace Inference, Principal Component Analysis, Random Projection
Abstract: Bayesian Neural Networks are often sought after for their strong and trustworthy predictive power. However, inference in these models is computationally expensive; its cost can be reduced with dimensionality reduction, where the key goal is to find an appropriate subspace in which to perform the inference while retaining most of the predictive power. In this work, we present a theoretical comparative study of Principal Component Analysis versus random projection for Bayesian Linear Regression. We find that PCA is not always the optimal dimensionality reduction method and that random projection can actually be superior, especially when the data distribution is shifted and the labels have a small norm. We then confirm these results experimentally. This work therefore suggests considering dimensionality reduction by random projection for Bayesian inference when noisy data are expected.
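To illustrate the setup the abstract describes, below is a minimal sketch of conjugate Bayesian linear regression performed in a low-dimensional subspace, comparing a PCA subspace with a random-projection subspace. The synthetic data, the prior/noise hyperparameters (alpha, sigma2), and the function name bayesian_linreg_in_subspace are illustrative assumptions for exposition only, not the authors' experimental configuration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic regression data (illustrative only).
n, d, k = 200, 50, 5          # samples, ambient dimension, subspace dimension
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)

def bayesian_linreg_in_subspace(X, y, P, alpha=1.0, sigma2=0.25):
    """Conjugate Bayesian linear regression on features projected by P (d x k).

    Prior: w ~ N(0, alpha^{-1} I); likelihood: y ~ N(Z w, sigma2 I) with Z = X P.
    Returns the posterior mean and covariance of the k-dimensional weights.
    """
    Z = X @ P
    A = Z.T @ Z / sigma2 + alpha * np.eye(P.shape[1])   # posterior precision
    cov = np.linalg.inv(A)
    mean = cov @ Z.T @ y / sigma2
    return mean, cov

# Subspace 1: span of the top-k principal directions of X.
P_pca = PCA(n_components=k).fit(X).components_.T          # shape (d, k)

# Subspace 2: random Gaussian projection, columns orthonormalized via QR.
P_rand, _ = np.linalg.qr(rng.normal(size=(d, k)))         # shape (d, k)

for name, P in [("PCA", P_pca), ("random projection", P_rand)]:
    mean, _ = bayesian_linreg_in_subspace(X, y, P)
    mse = np.mean((X @ P @ mean - y) ** 2)
    print(f"{name:>18}: in-sample MSE of posterior mean = {mse:.3f}")
```

Which subspace yields the better posterior depends on how the data and labels are distributed; the paper's point is that the PCA choice is not uniformly best.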
