Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference
Yarin Gal, Zoubin Ghahramani
Feb 16, 2016 (modified: Feb 16, 2016) · ICLR 2016 workshop submission · readers: everyone
Abstract: Convolutional neural networks (CNNs) work well on large datasets. But labelled
data is hard to collect, and in some applications larger amounts of data are not
available. The problem then is how to use CNNs with small data – as CNNs
overfit quickly. We present an efficient Bayesian CNN, offering better robustness
to over-fitting on small data than traditional approaches. This is by placing a
probability distribution over the CNN’s kernels. We approximate our model’s intractable
posterior with Bernoulli variational distributions, requiring no additional model parameters.
On the theoretical side, we cast dropout network training as approximate inference
in Bayesian neural networks. This allows us to implement our model using existing
tools in deep learning with no increase in time complexity, while highlighting a
negative result in the field. We show a considerable improvement in classification
accuracy compared to standard techniques and improve on published state-of-the-art
results for CIFAR-10.
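The "dropout training as approximate inference" view described above amounts, at test time, to keeping the Bernoulli dropout masks active and averaging several stochastic forward passes (Monte Carlo dropout), rather than rescaling weights deterministically. A minimal NumPy sketch of this idea, with hypothetical layer sizes, weights, and dropout rate chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny one-hidden-layer network; weights are illustrative only.
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 3))

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with a fresh Bernoulli dropout mask (kept at test time)."""
    h = np.maximum(x @ W1, 0.0)                  # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop          # Bernoulli(1 - p_drop) mask
    h = h * mask / (1.0 - p_drop)                # inverted-dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)     # softmax class probabilities

def mc_dropout_predict(x, T=100):
    """Average T stochastic passes: a Monte Carlo estimate of the approximate
    predictive distribution; the spread across passes estimates uncertainty."""
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.standard_normal((1, 4))
mean_probs, std_probs = mc_dropout_predict(x)
```

Because each pass reuses the network's ordinary dropout machinery, this costs only T forward passes and no extra parameters, matching the abstract's claim that the model can be implemented with existing deep-learning tools.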