Ta-Adapter: Enhancing few-shot CLIP with task-aware encoders

Published: 01 Jan 2024, Last Modified: 01 Oct 2024 · Pattern Recognition 2024 · CC BY-SA 4.0
Abstract: Highlights

- Our model design combines the advantages of both prompt learning and adapter tuning.
- It aligns CLIP's visual and textual encoders with specific datasets via few-shot images.
- Our model further enhances CLIP's few-shot capability, obtaining superior results.
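The highlights mention adapter tuning on top of CLIP's frozen encoders. The paper's actual Ta-Adapter architecture is not detailed here, so the following is only a minimal, generic sketch of residual adapter tuning in the style of CLIP-Adapter: a small bottleneck MLP transforms the frozen encoder's features, and the result is blended back with the original features via a residual ratio `alpha`. All dimensions and the `alpha` value are illustrative assumptions.

```python
import numpy as np

def residual_adapter(features, W_down, W_up, alpha=0.2):
    """Generic residual adapter sketch (not the paper's Ta-Adapter).

    Projects frozen CLIP features into a low-rank bottleneck with a
    ReLU, projects back up, and blends with the original features.
    """
    h = np.maximum(features @ W_down, 0.0)      # down-projection + ReLU
    adapted = h @ W_up                          # up-projection back to d
    return alpha * adapted + (1.0 - alpha) * features

# Illustrative setup: d is a typical CLIP embedding size, r a bottleneck.
rng = np.random.default_rng(0)
d, r = 512, 64                                  # assumed dimensions
x = rng.standard_normal(d)                      # a frozen CLIP feature
W_down = 0.01 * rng.standard_normal((d, r))     # learnable in practice
W_up = 0.01 * rng.standard_normal((r, d))       # learnable in practice

y = residual_adapter(x, W_down, W_up)
print(y.shape)
```

In few-shot fine-tuning of this kind, only the small adapter matrices (`W_down`, `W_up`) are trained while the CLIP encoders stay frozen, which keeps the number of trainable parameters low for the few-shot regime the highlights describe.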