Beyond Visual Cues: Harnessing Text Signal for Test-Time OOD Detection

ICLR 2026 Conference Submission 19318 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Out-of-Distribution detection, Image classification, Test-time adaptation
Abstract: Out-of-Distribution (OOD) detection has become increasingly critical for deploying reliable machine learning systems in open-world environments. While vision-language models (VLMs) such as CLIP show strong potential for OOD detection, most existing test-time OOD detection methods focus on storing representative visual features, leaving the adaptation potential of the textual modality largely unexplored. In this work, we investigate whether text-side adaptation can improve test-time OOD detection. To this end, we propose Test-time Textual OOD Discovery (TTOD), a framework that harnesses semantic knowledge directly from the test data stream with an unknown distribution. Our method progressively constructs a retrievable OOD textual knowledge bank by continuously updating OOD prompts during testing, guided by pseudo labels from a base detector. To alleviate the impact of contaminated pseudo labels, we further develop a purification strategy that exploits the tendency of similar OOD types to cluster, separating true OOD samples from ID samples the base detector misclassifies as OOD and thereby improving pseudo-label quality for more effective adaptation. Extensive experiments on two standard benchmarks spanning nine OOD datasets demonstrate that TTOD consistently achieves state-of-the-art performance, highlighting the value of textual intervention for robust test-time OOD detection.
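The pipeline sketched in the abstract (a base detector pseudo-labels test samples, low-scoring ones update a retrievable OOD bank, and bank similarity then sharpens the OOD score) can be illustrated with a minimal sketch. This is not the authors' implementation: the class name, thresholds, and the use of feature prototypes in place of learnable OOD prompts are all assumptions for illustration only; the base score is a generic max-cosine-similarity score against ID class text features.

```python
import numpy as np

def normalize(x):
    # L2-normalise rows so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

class TextualOODBank:
    """Illustrative sketch of a test-time OOD knowledge bank.

    A base detector scores each test feature by its max cosine similarity
    to the ID class text features. Samples scoring below `tau` are
    pseudo-labelled OOD and absorbed into a bank of prototypes (standing
    in for the updatable OOD prompts); at scoring time, similarity to the
    bank pushes the score of OOD-like samples further down.
    """

    def __init__(self, id_text_feats, tau=0.5, merge_thr=0.8):
        self.id_text = normalize(id_text_feats)  # (C, D) ID class prompts
        self.tau = tau                           # pseudo-OOD threshold (assumed)
        self.merge_thr = merge_thr               # prototype merge threshold (assumed)
        self.bank = []                           # discovered OOD prototypes

    def base_score(self, feat):
        # Higher = more ID-like.
        return float((self.id_text @ feat).max())

    def update(self, feat):
        feat = normalize(feat)
        if self.base_score(feat) >= self.tau:
            return  # pseudo-labelled ID: leave the bank untouched
        if self.bank:
            sims = np.array([p @ feat for p in self.bank])
            j = int(sims.argmax())
            if sims[j] >= self.merge_thr:
                # Same OOD type: merge into the closest prototype.
                self.bank[j] = normalize(self.bank[j] + feat)
                return
        self.bank.append(feat)  # new OOD type: open a new prototype

    def score(self, feat):
        feat = normalize(feat)
        s_id = self.base_score(feat)
        if not self.bank:
            return s_id
        s_ood = max(float(p @ feat) for p in self.bank)
        return s_id - s_ood  # bank similarity lowers OOD scores
```

The purification step described in the abstract would act on `self.bank`, e.g. by discarding prototypes that drift too close to the ID text features, since those likely absorbed misclassified ID samples.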
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 19318