Prompt Tuning Is All We Need?

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: vision-language models, prompt tuning, domain generalization
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Recent advances in pre-trained vision-language models such as CLIP have demonstrated remarkable success in domain generalization (DG) through prompt tuning. A promising direction for improving DG is therefore to design or learn better prompts, i.e., prompt learning, under the implicit assumption that a more elaborate prompt learning method leads to higher generalization performance. This assumption motivates us to ask: is prompt tuning all we need? To verify whether it holds for DG, we design comprehensive experiments on DG benchmarks. Our experiments, however, lead to a pessimistic conclusion: simply tuning prompts on the training sets achieves performance comparable to tuning them on the test sets. In other words, even the optimal prompts bring little performance gain over a simple tuning strategy. Our experiments show that this stems from the limited separability of the features extracted by the image encoder. We therefore propose tuning the image encoder, named Im-Tuning, to obtain more separable image features. Extensive experiments on multiple DG benchmarks demonstrate that Im-Tuning consistently outperforms the relevant state-of-the-art methods.
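The PyTorch sketch below is illustrative only and is not the authors' Im-Tuning implementation: it contrasts CoOp-style prompt tuning (learnable context vectors on the text side while CLIP stays frozen) with a hypothetical switch that unfreezes part of the CLIP image encoder, which is the general idea the abstract motivates. The module path visual.transformer.resblocks follows the OpenAI CLIP ViT layout and is an assumption here, as is the choice of which layers to unfreeze.

import torch
import torch.nn as nn

class PromptLearner(nn.Module):
    """CoOp-style prompt tuning: learn n_ctx shared context vectors."""
    def __init__(self, n_ctx: int = 16, ctx_dim: int = 512):
        super().__init__()
        # Learnable context tokens prepended to each class-name embedding.
        self.ctx = nn.Parameter(torch.empty(n_ctx, ctx_dim))
        nn.init.normal_(self.ctx, std=0.02)

    def forward(self, class_token_embeds: torch.Tensor) -> torch.Tensor:
        # class_token_embeds: (n_classes, n_tokens, ctx_dim)
        n_cls = class_token_embeds.size(0)
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)
        return torch.cat([ctx, class_token_embeds], dim=1)

def set_trainable(clip_model: nn.Module, tune_image_encoder: bool) -> None:
    """Freeze all of CLIP, then optionally unfreeze part of the image encoder.

    Prompt tuning alone keeps the image encoder frozen; the abstract argues the
    resulting image features may not be separable enough, which motivates also
    tuning the image encoder (the exact layers unfrozen here are an assumption).
    """
    for p in clip_model.parameters():
        p.requires_grad_(False)
    if tune_image_encoder:
        # Example choice: unfreeze only the last visual transformer block.
        for p in clip_model.visual.transformer.resblocks[-1].parameters():
            p.requires_grad_(True)

In this framing, a prompt-tuning baseline would optimize only PromptLearner.ctx, while an image-encoder-tuning variant would additionally optimize the unfrozen encoder parameters.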
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4475