PCA Subspaces Are Not Always Optimal for Bayesian Learning

09 Oct 2021, 14:49 (modified: 10 Oct 2021, 16:32) · NeurIPS 2021 Workshop DistShift Poster
Keywords: Bayesian Inference, Subspace Inference, Principal Component Analysis, Random Projection
Abstract: Bayesian neural networks are often sought for their strong and trustworthy predictive power, but inference in these models is computationally expensive. The cost can be reduced through dimensionality reduction, where the key goal is to find a subspace in which to perform inference while retaining most of the predictive power. In this work, we present a theoretical comparison of Principal Component Analysis (PCA) and random projection for Bayesian linear regression. We find that PCA is not always the optimal dimensionality reduction method and that random projection can be superior, especially when the data distribution is shifted and the labels have a small norm. We then confirm these results experimentally. This work therefore suggests considering dimension reduction by random projection for Bayesian inference when noisy data are expected.
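The setting compared in the abstract can be sketched as follows: project the features into a low-dimensional subspace (via PCA components or a random Gaussian projection) and run conjugate Bayesian linear regression there. This is a minimal illustrative sketch, not the authors' code; the data sizes, prior precision `alpha`, and noise precision `beta` are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (sizes are illustrative assumptions)
n, d, k = 200, 50, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)

def pca_projection(X, k):
    """Top-k principal directions of X, returned as a (d, k) matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T

def random_projection(d, k, rng):
    """Random Gaussian projection with orthonormalised columns, shape (d, k)."""
    Q, _ = np.linalg.qr(rng.normal(size=(d, k)))
    return Q

def bayesian_linreg_posterior(Z, y, alpha=1.0, beta=4.0):
    """Conjugate Bayesian linear regression in the projected space.
    Prior w ~ N(0, alpha^{-1} I), Gaussian noise with precision beta."""
    k = Z.shape[1]
    S = np.linalg.inv(alpha * np.eye(k) + beta * Z.T @ Z)  # posterior covariance
    m = beta * S @ Z.T @ y                                 # posterior mean
    return m, S

for name, P in [("PCA", pca_projection(X, k)),
                ("random", random_projection(d, k, rng))]:
    Z = X @ P  # project the data into the k-dimensional subspace
    m, S = bayesian_linreg_posterior(Z, y)
    mse = np.mean((Z @ m - y) ** 2)
    print(f"{name} subspace: in-sample posterior-mean MSE = {mse:.3f}")
```

Which projection yields the lower error depends on the data; the paper's claim is precisely that under distribution shift and small-norm labels the random projection can win.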