Optimization of Prompt Learning via Multi-Knowledge Representation for Vision-Language Models

Enming Zhang, Bingke Zhu, Yingying Chen, Qinghai Miao, Ming Tang, Jinqiao Wang

Published: 01 Jan 2025, Last Modified: 09 Nov 2025 · IEEE Transactions on Multimedia · CC BY-SA 4.0
Abstract: Vision-language models (VLMs), such as CLIP, play a foundational role in various cross-modal applications. To fully leverage the potential of VLMs in adapting to downstream tasks, context optimization methods such as prompt tuning are essential. However, one key limitation is the lack of diversity in prompt templates, whether they are hand-crafted or learned through additional modules. This limitation restricts the capabilities of pretrained VLMs and can result in incorrect predictions in downstream tasks. To address this challenge, we propose context optimization with multi-knowledge representation (CoKnow), a framework that enhances prompt learning for VLMs with rich contextual knowledge. To facilitate CoKnow during inference, we train lightweight semantic knowledge mappers, which can generate multi-knowledge representations for an input image without requiring additional priors. Extensive experiments on 11 publicly available datasets demonstrate that CoKnow outperforms a series of previous methods.
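To make the abstract's description more concrete, below is a minimal sketch (not the authors' released implementation) of how learnable prompt context vectors can be combined with lightweight mapper networks that project an image feature into extra context tokens, loosely following the idea of semantic knowledge mappers. All module names, dimensions, and the number of mappers here are illustrative assumptions.

```python
# Hedged sketch: CoOp-style learnable context plus hypothetical lightweight
# mappers that turn an image feature into additional context tokens.
import torch
import torch.nn as nn


class SemanticKnowledgeMapper(nn.Module):
    """Hypothetical lightweight mapper: image feature -> one context token."""

    def __init__(self, feat_dim: int = 512, ctx_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 4),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 4, ctx_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # (batch, feat_dim) -> (batch, 1, ctx_dim): one knowledge-conditioned token
        return self.net(image_features).unsqueeze(1)


class PromptLearner(nn.Module):
    """Learnable context vectors shared across classes, concatenated with
    mapper-generated tokens and class-name embeddings (illustrative only)."""

    def __init__(self, n_ctx: int = 4, ctx_dim: int = 512, n_mappers: int = 2):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(n_ctx, ctx_dim) * 0.02)
        self.mappers = nn.ModuleList(
            SemanticKnowledgeMapper(ctx_dim, ctx_dim) for _ in range(n_mappers)
        )

    def forward(self, image_features, class_embeddings):
        # image_features: (B, ctx_dim); class_embeddings: (K, L, ctx_dim)
        B, K = image_features.shape[0], class_embeddings.shape[0]
        shared_ctx = self.ctx.unsqueeze(0).expand(B, -1, -1)                      # (B, n_ctx, D)
        knowledge = torch.cat([m(image_features) for m in self.mappers], dim=1)   # (B, M, D)
        ctx = torch.cat([shared_ctx, knowledge], dim=1)                           # (B, n_ctx+M, D)
        # Broadcast the per-image context over all K class-name embeddings
        ctx = ctx.unsqueeze(1).expand(-1, K, -1, -1)                              # (B, K, n_ctx+M, D)
        cls = class_embeddings.unsqueeze(0).expand(B, -1, -1, -1)                 # (B, K, L, D)
        return torch.cat([ctx, cls], dim=2)                                       # (B, K, n_ctx+M+L, D)


if __name__ == "__main__":
    learner = PromptLearner()
    img_feat = torch.randn(8, 512)        # placeholder for a CLIP image-encoder output
    cls_emb = torch.randn(10, 3, 512)     # placeholder: 10 classes, 3 name tokens each
    print(learner(img_feat, cls_emb).shape)  # torch.Size([8, 10, 9, 512])
```

In this sketch, the assembled prompt tokens would be fed to a frozen CLIP text encoder and trained with a contrastive objective against image features; only the context vectors and the small mapper MLPs carry trainable parameters, which is what keeps the approach lightweight.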