Keywords: deep learning, deep neural networks, early exiting, uncertainty quantification
TL;DR: We propose a Dirichlet-based framework to directly quantify model uncertainty and improve early exiting of ambiguous data in deep neural networks.
Abstract: Deep neural networks are renowned for their accuracy across a spectrum of machine learning tasks but often suffer from prolonged inference time due to their depth. Early exiting strategies have been proposed to mitigate this by allowing predictions to be made at intermediate layers. However, we observe that using total uncertainty as the exiting criterion does not consistently reflect true model uncertainty, causing traditional methods to prevent early exits for ambiguous data even when model uncertainty is low. To address this limitation, we propose a Dirichlet-based framework to directly quantify model uncertainty. Models trained with our approach demonstrate more balanced handling of both ambiguous and unambiguous data, enabling a higher proportion of ambiguous samples to exit early.
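The abstract does not spell out the parameterization, so the sketch below is only a minimal illustration of the general idea: a Dirichlet-based decomposition of predictive uncertainty into data (aleatoric) and model (epistemic) parts, with early exit gated on the model part rather than the total. It assumes a common evidential-style setup where an exit head's outputs are mapped to Dirichlet concentration parameters; the function names, the `softplus + 1` mapping, and the threshold `tau` are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainties(logits: torch.Tensor):
    """Decompose predictive uncertainty from Dirichlet parameters.

    Assumes (evidential-style convention, not necessarily the paper's)
    that the exit head's logits are mapped to concentration parameters
    alpha = softplus(logits) + 1, so alpha_k > 1 encodes evidence for class k.
    """
    alpha = F.softplus(logits) + 1.0            # Dirichlet concentrations, alpha_k > 0
    s = alpha.sum(dim=-1, keepdim=True)         # Dirichlet strength (total evidence)
    p = alpha / s                               # expected class probabilities E[pi]

    # Total uncertainty: entropy of the expected categorical distribution.
    total = -(p * p.clamp_min(1e-12).log()).sum(dim=-1)
    # Aleatoric (data) uncertainty: expected entropy under the Dirichlet,
    # E[H(Cat(pi))] = -sum_k (alpha_k / S) * (digamma(alpha_k + 1) - digamma(S + 1)).
    aleatoric = -(p * (torch.digamma(alpha + 1.0) - torch.digamma(s + 1.0))).sum(dim=-1)
    # Epistemic (model) uncertainty: the mutual information, total - aleatoric.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

def should_exit(logits: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """Exit early when *model* uncertainty is low, even for ambiguous inputs
    where total uncertainty is high. `tau` is a hypothetical threshold."""
    _, _, epistemic = dirichlet_uncertainties(logits)
    return epistemic < tau
```

The key contrast with a total-uncertainty criterion: an ambiguous sample (e.g., a genuinely borderline image) can have high total entropy but low epistemic uncertainty, since the model is confident that the classes are near-equiprobable; gating on the epistemic term lets such samples exit early.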
Submission Number: 80