Keywords: deep ensembles, PAC-Bayes, uncertainty
TL;DR: We present a theoretically motivated approach to improving deep ensembles in which each ensemble member can be trained independently.
Abstract: Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised deep learning. We propose to improve deep ensembles by optimizing a PAC-Bayesian bound that is tighter than the most popular ones. Our approach has a number of benefits over previous methods: 1) it improves performance without requiring any communication between ensemble members during training and is trivially parallelizable, and 2) it results in a soft-thresholding gradient update that is simpler than the alternatives. Empirically, we outperform competing approaches that improve ensembles by encouraging diversity. We report test-accuracy gains for MLP, LeNet, and WideResNet architectures on a variety of datasets.
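The abstract does not spell out the exact update rule. Purely as an illustration of what a soft-thresholding gradient update can look like, here is a minimal NumPy sketch; the function names, the learning rate `lr`, the threshold `lam`, and the proximal-style placement of the shrinkage after the gradient step are all assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def soft_threshold(x, lam):
    # Shrink each entry toward zero by lam; entries in [-lam, lam] become 0.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def independent_member_step(params, grads, lr=0.1, lam=0.01):
    # Hypothetical update for one ensemble member: a plain gradient step
    # followed by soft thresholding. Nothing here depends on the other
    # members, so each member can be trained fully in parallel.
    return soft_threshold(params - lr * grads, lam)

# Toy usage: one update step on random parameters and gradients.
rng = np.random.default_rng(0)
params = rng.normal(size=5)
grads = rng.normal(size=5)
print(independent_member_step(params, grads))
```

Because the update touches only one member's own parameters and gradients, the "no communication between ensemble members" claim follows directly: each member's training loop is an independent job.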
Submission Number: 32