Keywords: Token/Prompt Tuning, Efficient Fine-Tuning, Visual Prompt, Image Captioning
TL;DR: ViPCap is a novel lightweight image captioning model that generates visual prompts from retrieved text containing visual semantic representations.
Abstract: Recent lightweight image captioning models that use retrieved data focus mainly on text prompts: the retrieved data serve only as text prompts, while visual information comes solely from the vision encoder. As a result, the image descriptions in the prompt are not sufficiently reflected in the visual representations. To tackle this issue, we propose ViPCap, a novel retrieval-text-based visual prompt for lightweight image captioning. ViPCap leverages retrieved text containing image information as a visual prompt, enhancing the model's ability to capture relevant visual information. By mapping text prompts into the CLIP space and sampling from Gaussian distributions, we effectively retrieve semantic features that contain image information. These features are integrated into the image representation and serve as the visual prompt, yielding performance improvements on datasets such as COCO, Flickr30k, and NoCaps. Experimental results show that ViPCap significantly outperforms prior lightweight captioning models in both efficiency and effectiveness, demonstrating its potential as a plug-and-play solution.
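The abstract describes a three-step mechanism: map the retrieved text into the CLIP embedding space, sample a semantic feature from a Gaussian distribution, and inject the sample into the image representation as a visual prompt. Below is a minimal PyTorch sketch of that pipeline; the module name, dimensions, reparameterized sampling, and additive fusion are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TextToVisualPrompt(nn.Module):
    """Hypothetical sketch of the ViPCap idea: map a retrieved-caption
    embedding (CLIP text space) to a Gaussian over semantic features,
    sample from it, and inject the sample as a visual prompt."""

    def __init__(self, clip_dim=512, num_patches=196, patch_dim=768):
        super().__init__()
        # Predict Gaussian parameters from the retrieved-text embedding.
        self.to_mu = nn.Linear(clip_dim, clip_dim)
        self.to_logvar = nn.Linear(clip_dim, clip_dim)
        # Project the sampled semantic feature into patch-embedding space.
        self.to_prompt = nn.Linear(clip_dim, num_patches * patch_dim)
        self.num_patches, self.patch_dim = num_patches, patch_dim

    def forward(self, text_emb, patch_emb):
        mu, logvar = self.to_mu(text_emb), self.to_logvar(text_emb)
        # Reparameterized sample from N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        prompt = self.to_prompt(z).view(-1, self.num_patches, self.patch_dim)
        # Fuse the text-derived visual prompt with the patch embeddings
        # (additive fusion is an assumption for this sketch).
        return patch_emb + prompt

# Toy usage: stand-ins for a frozen CLIP text embedding of a retrieved
# caption and for ViT patch embeddings of the input image.
text_emb = torch.randn(1, 512)
patch_emb = torch.randn(1, 196, 768)
prompted = TextToVisualPrompt()(text_emb, patch_emb)
print(prompted.shape)  # torch.Size([1, 196, 768])
```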
Submission Number: 42