Keywords: uncertainty estimation, robustness, risk-aware ML
Abstract: The modern pervasiveness of large-scale deep neural networks (NNs) is driven by their extraordinary performance on complex problems, but it is also plagued by their sudden, unexpected, and often catastrophic failures, particularly in challenging scenarios. Unfortunately, existing algorithms for achieving risk-awareness in NNs are complex and ad hoc. Specifically, these methods require significant engineering changes, are often developed only for particular settings, and are not easily composable. Here we present Capsa, a flexible framework for extending models with risk-awareness. Capsa provides a principled methodology for quantifying multiple forms of risk and composes different algorithms to quantify different risk metrics in parallel. We validate Capsa by implementing state-of-the-art uncertainty estimation algorithms within the framework and benchmarking them on complex perception datasets. Furthermore, we demonstrate Capsa's ability to easily compose aleatoric uncertainty, epistemic uncertainty, and bias estimation together in a single procedure, and show how this integration provides a comprehensive awareness of NN risk.
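The composability the abstract describes, wrapping a model so that one call returns a prediction alongside several risk metrics, can be sketched as follows. This is a minimal illustrative sketch only, not the actual Capsa API: the wrapper names (`with_aleatoric`, `with_epistemic`) and the toy estimators are hypothetical stand-ins for the methods the framework implements.

```python
# Hypothetical sketch of composable risk wrappers (NOT the real Capsa API).
# A "model" is any callable; each wrapper adds one risk metric to its output.
import random

def base_model(x):
    # Toy regression model standing in for a neural network.
    return 2.0 * x + 1.0

def with_aleatoric(model, noise_std=0.1):
    # Attach a fixed, illustrative aleatoric (data) noise estimate.
    def wrapped(x):
        out = model(x)
        out = out if isinstance(out, dict) else {"prediction": out}
        out["aleatoric"] = noise_std
        return out
    return wrapped

def with_epistemic(model, n_samples=5, jitter=0.05):
    # Estimate epistemic (model) uncertainty as the spread across perturbed
    # forward passes -- a crude stand-in for ensembling or MC dropout.
    def wrapped(x):
        out = model(x)
        out = out if isinstance(out, dict) else {"prediction": out}
        samples = [out["prediction"] + random.gauss(0.0, jitter)
                   for _ in range(n_samples)]
        mean = sum(samples) / n_samples
        var = sum((s - mean) ** 2 for s in samples) / n_samples
        out["epistemic"] = var ** 0.5
        return out
    return wrapped

# Wrappers stack, so a single call yields several risk metrics in parallel.
risk_aware = with_epistemic(with_aleatoric(base_model))
result = risk_aware(3.0)
print(sorted(result))
```

The point of the sketch is the design choice: each risk estimator is an independent, uniform transformation of a model, so metrics compose by function wrapping rather than by bespoke engineering changes to the network itself.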
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Infrastructure (eg, datasets, competitions, implementations, libraries)
TL;DR: A simple, scalable, and composable framework for creating risk-aware models