(Implicit) Ensembles of Ensembles: Epistemic Uncertainty Collapse in Large Models

Published: 28 May 2025 (last modified: 28 May 2025). Accepted by TMLR. License: CC BY 4.0
Authors that are also TMLR Expert Reviewers: ~Andreas_Kirsch1
Abstract: Epistemic uncertainty is crucial for safety-critical applications and data acquisition tasks. Yet we identify an important phenomenon in deep learning models: epistemic uncertainty collapses as model complexity increases, challenging the assumption that larger models invariably offer better uncertainty quantification. We introduce implicit ensembling as a possible explanation for this phenomenon. To investigate this hypothesis, we provide theoretical analysis and experiments that demonstrate uncertainty collapse in explicit ensembles of ensembles, and we show experimental evidence of a similar collapse in wider models across various architectures, from simple MLPs to state-of-the-art vision models including ResNets and Vision Transformers. We further develop implicit ensemble extraction techniques to decompose larger models into diverse sub-models, showing that we can thereby recover epistemic uncertainty. We explore the implications of these findings for uncertainty estimation.
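The collapse described in the abstract can be illustrated with a minimal NumPy sketch (not taken from the paper's code; the Dirichlet-sampled predictions and the group sizes are illustrative assumptions). Epistemic uncertainty is estimated here as the mutual information between predictions and members: the entropy of the mean prediction minus the mean entropy of member predictions. Averaging within sub-ensembles first, then treating the averaged predictors as members of an outer ensemble, shrinks this estimate, since by concavity of entropy each averaged predictor is at least as high-entropy as its members on average:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p, axis=-1):
    """Shannon entropy of categorical distributions along `axis`."""
    return -np.sum(p * np.log(np.clip(p, 1e-12, 1.0)), axis=axis)

def mutual_information(probs):
    """Epistemic uncertainty of an ensemble: H(mean prediction) - mean H(member predictions).

    probs: array of shape (members, classes) for a single input.
    """
    return entropy(probs.mean(axis=0)) - entropy(probs, axis=-1).mean()

# K sub-ensembles of M members each, predicting over C classes.
# Diverse member predictions are simulated with Dirichlet draws (an assumption).
K, M, C = 5, 10, 4
members = rng.dirichlet(np.ones(C), size=K * M)          # (K*M, C)

# Flat ensemble: all K*M members counted individually.
mi_flat = mutual_information(members)

# Ensemble of ensembles: average within each sub-ensemble first,
# then treat the K averaged predictors as the outer ensemble's members.
sub_means = members.reshape(K, M, C).mean(axis=1)        # (K, C)
mi_nested = mutual_information(sub_means)

print(f"flat MI:   {mi_flat:.4f}")
print(f"nested MI: {mi_nested:.4f}")
```

The global mean prediction is identical in both cases, so the first term of the mutual information is unchanged; only the average member entropy grows under inner averaging, which is exactly the collapse the paper attributes to implicit ensembling inside larger models.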
Certifications: Expert Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/BlackHC/2409.02628
Assigned Action Editor: ~Mark_Coates1
Submission Number: 3289