Interaction of Generalization and Out-of-Distribution Detection Capabilities in Deep Neural Networks

Published: 01 Jan 2023, Last Modified: 15 Apr 2025 · ICANN (10) 2023 · CC BY-SA 4.0
Abstract: Current supervised deep learning models achieve exceptional performance when evaluation samples come from a known source, but are susceptible to performance degradation when the data distribution is even slightly shifted. In this work, we study the interaction of two related aspects in this context: (1) the out-of-distribution (OOD) generalization ability of DNNs to successfully classify samples from unobserved data distributions, and (2) the ability to detect strictly OOD samples observed at test time, finding that acquiring these two capabilities can be at odds. We experimentally analyze the impact of various texture and shape biases in the training data on both abilities. Importantly, we reveal that naive outlier exposure mechanisms can improve OOD detection performance while introducing strong texture biases that conflict with the generalization abilities of the networks. We further explore the influence of such conflicting texture-bias backdoors, which lead to unreliable OOD detection on spurious OOD samples observed at test time.
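The paper's exact training setup is not given here; as a rough illustration of what a naive outlier exposure objective typically looks like, the sketch below combines standard cross-entropy on in-distribution samples with a term pushing predictions on auxiliary outliers toward the uniform distribution. The function name `outlier_exposure_loss` and the weight `lam` are assumptions for illustration, not the authors' code; the 0.5 default follows common outlier-exposure practice.

```python
import torch.nn.functional as F

def outlier_exposure_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Minimal sketch of a naive outlier exposure objective (assumed setup).

    logits_in:  model logits for in-distribution samples, shape (B_in, K)
    labels_in:  ground-truth class labels, shape (B_in,)
    logits_out: model logits for auxiliary outlier samples, shape (B_out, K)
    lam:        weight on the outlier term (0.5 is a common choice, assumed here)
    """
    # Standard classification loss on in-distribution data.
    ce = F.cross_entropy(logits_in, labels_in)
    # Cross-entropy between the outlier softmax and the uniform distribution;
    # up to a constant, this is the negative mean log-softmax over classes.
    oe = -F.log_softmax(logits_out, dim=1).mean()
    return ce + lam * oe
```

Minimizing the outlier term flattens the network's confidence on exposed outliers, which is what improves detection scores, but the abstract's finding is that the features the network uses to achieve this flattening can themselves be texture-biased and so harm OOD generalization.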