Abstract: Deep Ensemble (DE) is an effective and practical uncertainty quantification approach in deep learning. The uncertainty of DE is usually manifested by the functional inconsistency among the ensemble members, which, however, originates from unmanageable randomness in the initialization and optimization of neural networks (NNs) and may easily collapse in specific cases. To tackle this issue, we advocate characterizing the functional inconsistency with the empirical covariance of the functions dictated by the ensemble members, and defining a Gaussian process (GP) with it. We perform functional variational inference to tune such a probabilistic model w.r.t. training data and specific prior beliefs. This way, we can explicitly manage the uncertainty of the ensemble of NNs. We further provide strategies to make the training efficient. The proposed approach achieves better uncertainty quantification than DE and its variants across diverse scenarios, while incurring only marginally increased training cost compared to standard DE. The code is available at https://github.com/thudzj/DE-GP.
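The core construction in the abstract — treating the empirical covariance of the ensemble members' outputs as a GP kernel — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, shapes, and the toy data are illustrative assumptions.

```python
import numpy as np

def ensemble_to_gp(member_outputs):
    """Hypothetical sketch: turn ensemble outputs into GP statistics.

    member_outputs: array of shape (n_members, n_points), where entry
    (i, j) is f_i(x_j), the output of ensemble member i at input x_j.
    Returns the empirical mean function and empirical covariance
    (kernel) matrix over the n_points inputs.
    """
    mean = member_outputs.mean(axis=0)        # GP mean m(x_j)
    centered = member_outputs - mean          # f_i(x_j) - m(x_j)
    # Empirical covariance over members:
    # K[j, k] = (1/M) * sum_i (f_i(x_j) - m(x_j)) (f_i(x_k) - m(x_k))
    cov = centered.T @ centered / member_outputs.shape[0]
    return mean, cov

# Toy usage: 5 "ensemble members" evaluated at 3 inputs.
rng = np.random.default_rng(0)
outs = rng.normal(size=(5, 3))
m, K = ensemble_to_gp(outs)
assert K.shape == (3, 3)
assert np.allclose(K, K.T)  # a covariance matrix is symmetric
```

Because K is built from centered outer products, it is symmetric and positive semi-definite by construction, so it is a valid GP kernel evaluated at the given inputs; if the members agree everywhere (functional inconsistency collapses), K degenerates to zero, which is exactly the failure mode the paper's functional variational inference is meant to manage.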
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
- Updated Table 1 to include the NLL for the classification of in-distribution data on CIFAR-10.
- Added the results of distinguishing Fashion-MNIST from MNIST and CIFAR-10 from SVHN in Table 4 of Appendix A.3.2.
- Included experiments on the large-scale TinyImageNet in Table 6.
- Revised the arguments concerning deep ensemble-based baselines in Section 5.
- Discussed related work on FVI by Rudner et al. (2022) in Section 2.
- Clarified that deep ensemble is a practical method for uncertainty quantification in Sections 1 and 2.
Code: https://github.com/thudzj/DE-GP
Supplementary Material: zip
Assigned Action Editor: ~Yingzhen_Li1
Submission Number: 2707