Abstract: Interpretability of machine learning model predictions becomes more important as the industry moves towards the safe and reliable deployment of automated systems built on predictive models. Obtaining posterior predictive distributions is a major step towards these interpretability goals, enabling answers to questions such as “How confident is my model in its prediction?”. In modern statistics, Variational Inference (VI) is considered an efficient alternative to Markov chain Monte Carlo methods for approximating posterior densities. Despite the computational gains of casting inference as optimization, VI can still be expensive when applied to the large models and datasets arising in many industry applications, including Bayesian Neural Networks (BNNs). As a result, most practical applications of VI to BNNs rely on mean-field VI. However, because mean-field VI ignores correlations between latent variables, it can be limiting for neural networks, whose connected structure induces strong dependencies. In this work, we present the Rank-1 update of the Bayesian Learning Rule (RouBL), a simple and computationally efficient algorithm for learning the covariance of latent variables when performing Bayesian inference. Our initial analysis on UCI datasets shows promising results, efficiently capturing distributional information compared to a mean-field VI benchmark.
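The abstract does not spell out RouBL's covariance parameterization, but a common way to go beyond mean-field while keeping the per-sample cost linear in the number of parameters is a rank-1-plus-diagonal Gaussian posterior, Σ = diag(σ²) + uuᵀ. The sketch below is an illustrative assumption of this setting, not the paper's algorithm; the function name `sample_rank1_gaussian` and all variable names are hypothetical. It shows how such a posterior can be sampled with one extra noise scalar per draw, so the rank-1 structure adds almost no cost over a mean-field sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rank1_gaussian(mu, sigma, u, n_samples=1):
    """Draw samples from N(mu, diag(sigma**2) + u u^T) via reparameterization.

    Hypothetical sketch (not the paper's RouBL update): a latent vector w is
    written as
        w = mu + sigma * eps1 + u * eps2,
    with eps1 ~ N(0, I) and eps2 ~ N(0, 1) independent, which yields the
    rank-1-plus-diagonal covariance diag(sigma**2) + u u^T.
    """
    d = mu.shape[0]
    eps1 = rng.standard_normal((n_samples, d))  # per-coordinate noise (mean-field part)
    eps2 = rng.standard_normal((n_samples, 1))  # single shared scalar noise (rank-1 part)
    return mu + sigma * eps1 + eps2 * u

# Example: a 3-dimensional latent with one shared direction of correlation.
mu = np.zeros(3)
sigma = np.full(3, 0.1)          # per-coordinate (mean-field) scales
u = np.array([0.5, -0.3, 0.2])   # rank-1 factor coupling the coordinates

samples = sample_rank1_gaussian(mu, sigma, u, n_samples=10_000)
# The sample covariance should be close to diag(sigma**2) + np.outer(u, u).
print(np.cov(samples, rowvar=False))
```

One design point worth noting: the reparameterized form above keeps sampling differentiable in (mu, sigma, u), so the same trick could plug into any stochastic VI objective; whether RouBL uses this exact parameterization is an assumption here.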