Prompt Optimization Meets Subspace Representation Learning for Few-shot Out-of-Distribution Detection

Published: 09 Jul 2025, Last Modified: 09 Jul 2025 · KDD 2025 Workshop on Prompt Optimization · Poster · CC BY 4.0
Submission Type: Short
Keywords: Prompt learning, Out-of-distribution detection, Vision-language models, Subspace representation learning
TL;DR: We propose a prompt-tuning framework that enhances few-shot OOD detection by projecting in-distribution features into a learned subspace while pushing out-of-distribution features into its orthogonal complement.
Abstract: The reliability of artificial intelligence (AI) systems in open-world settings depends heavily on their ability to flag out-of-distribution (OOD) inputs, i.e., inputs unseen during training. Recent advances in large-scale vision-language models (VLMs) have enabled promising few-shot OOD detection frameworks that use only a handful of in-distribution (ID) samples. However, existing prompt learning-based OOD methods rely solely on softmax probabilities, overlooking the rich discriminative potential of the feature embeddings learned by VLMs trained on millions of samples. To address this limitation, we propose a novel context optimization (CoOp)-based framework that integrates subspace representation learning with prompt tuning. Our approach improves ID-OOD separability by projecting ID features into a subspace spanned by the prompt vectors, while projecting ID-irrelevant features into its orthogonal complement. To train this OOD detection framework, we design a simple end-to-end learning criterion that yields strong OOD detection performance as well as high ID classification accuracy. Experiments on real-world datasets demonstrate the effectiveness of our approach.
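The core scoring idea in the abstract — projecting a feature onto a subspace spanned by learned prompt vectors and measuring the residual energy in the orthogonal complement — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt matrix `P`, dimensions, and the residual-norm score are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned prompt vectors spanning the ID subspace:
# k vectors in a d-dimensional feature space (both dimensions illustrative).
d, k = 16, 4
P = rng.standard_normal((d, k))

# Orthonormal basis of the prompt subspace via QR decomposition.
Q, _ = np.linalg.qr(P)   # Q: (d, k), columns orthonormal
proj = Q @ Q.T           # orthogonal projector onto the ID subspace

def ood_score(x: np.ndarray) -> float:
    """Energy of the feature in the orthogonal complement of the
    prompt subspace; a larger residual norm suggests a more OOD-like input."""
    residual = x - proj @ x
    return float(np.linalg.norm(residual))

# A feature lying inside the prompt subspace has (numerically) zero residual,
# while a generic feature retains most of its energy in the complement.
x_id = P @ rng.standard_normal(k)   # lies exactly in span(P)
x_ood = rng.standard_normal(d)      # generic direction
```

Thresholding `ood_score` then separates ID from OOD inputs; the paper's end-to-end criterion additionally tunes the prompt vectors so that ID features concentrate in the subspace.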
Supplementary Material: zip
Submission Number: 26