Abstract: Deep neural networks are known to be vulnerable to unseen data: they may wrongly assign high confidence scores to out-of-distribution samples. Existing works try to handle the problem using intricate representation learning methods and specific metrics. In this paper, we propose a simple yet effective post-hoc out-of-distribution detection algorithm named Test Time Augmentation Consistency Evaluation~(TTACE). It is inspired by a novel observation that, on a trained network, in-distribution data enjoy more consistent predictions between their original and augmented versions than out-of-distribution data, which separates in-distribution from out-of-distribution samples.
Experiments on various high-resolution image benchmark datasets demonstrate that TTACE achieves comparable or better detection performance under dataset-vs-dataset anomaly detection settings, with a $60\%\sim90\%$ reduction in running time compared to existing classifier-based algorithms.
We also provide empirical verification that the key to TTACE lies in the remaining classes between augmented features, a factor that previous works have partially ignored.
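To make the consistency idea concrete, here is a minimal sketch of how a test-time augmentation consistency score could be computed with a trained classifier. The augmentation (horizontal flip), the cosine-similarity consistency measure, and the threshold value are illustrative assumptions, not necessarily the exact choices made by TTACE.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

# Illustrative augmentation; the paper's actual augmentation may differ.
augment = T.RandomHorizontalFlip(p=1.0)


@torch.no_grad()
def consistency_score(model: torch.nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Per-sample consistency between predictions on original and augmented
    images. Higher scores are expected for in-distribution inputs; a
    threshold on this score flags out-of-distribution samples."""
    model.eval()
    probs_orig = F.softmax(model(images), dim=1)
    probs_aug = F.softmax(model(augment(images)), dim=1)
    # Cosine similarity of the two softmax vectors as an assumed
    # consistency measure (the paper's exact metric may differ).
    return F.cosine_similarity(probs_orig, probs_aug, dim=1)


if __name__ == "__main__":
    # Placeholder classifier and random inputs for demonstration only.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    batch = torch.rand(8, 3, 32, 32)
    scores = consistency_score(model, batch)
    is_in_distribution = scores > 0.5  # threshold would be chosen on held-out data
    print(scores, is_in_distribution)
```

Because the method is post-hoc, the sketch only requires forward passes through an already-trained network, which is consistent with the reported reduction in running time relative to classifier-based detectors.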
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: Zhiding Yu
Submission Number: 1342