Keywords: Bayesian Deep Learning, Data Augmentation, Bayesian Neural Networks, Invariant Neural Networks
TL;DR: We propose a principled method to incorporate data augmentation into BNN training, but find that the cold posterior effect persists.
Abstract: Bayesian neural networks that incorporate data augmentation implicitly use a "randomly perturbed log-likelihood [which] does not have a clean interpretation as a valid likelihood function" (Izmailov et al. 2021). Here, we provide several approaches to developing principled Bayesian neural networks incorporating data augmentation. We introduce a "finite orbit" setting which allows valid likelihoods to be computed exactly, and for the more usual "full orbit" setting we derive multi-sample bounds tighter than those used previously for Bayesian neural networks with data augmentation. These models cast light on the origin of the cold posterior effect. In particular, we find that the cold posterior effect persists even in these principled models incorporating data augmentation. This suggests that the cold posterior effect cannot be dismissed as an artifact of data augmentation using incorrect likelihoods.
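The abstract's central construction can be illustrated numerically. In the finite-orbit setting, the valid likelihood is the average of the per-augmentation likelihoods over the K orbit elements; in the full-orbit setting, the same average over K *sampled* augmentations gives a multi-sample lower bound on the intractable expected likelihood that tightens as K grows. The sketch below is illustrative only (function names are not from the paper) and assumes the per-augmentation log-likelihoods log p(y | g_k(x), w) have already been computed by the network:

```python
import numpy as np

def log_mean_exp(log_vals):
    # Numerically stable log of the mean of exponentials:
    # log( (1/K) * sum_k exp(log_vals[k]) ).
    m = np.max(log_vals)
    return m + np.log(np.mean(np.exp(log_vals - m)))

def orbit_log_likelihood(per_aug_log_probs):
    """per_aug_log_probs: array of K values log p(y | g_k(x), w).

    Finite-orbit setting: g_1..g_K enumerate the whole orbit, so this is
    the exact, valid log-likelihood  log (1/K) sum_k p(y | g_k(x), w).

    Full-orbit setting: g_1..g_K are sampled from the augmentation
    distribution, and by Jensen's inequality this is a multi-sample
    lower bound on  log E_g[p(y | g(x), w)]  that tightens with K.
    """
    return log_mean_exp(np.asarray(per_aug_log_probs))
```

Note that averaging likelihoods (not log-likelihoods) is what distinguishes this valid construction from the "randomly perturbed log-likelihood" used implicitly by standard augmented training, which averages in log space.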
Supplementary Material: zip
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/data-augmentation-in-bayesian-neural-networks/code)