Exploring Diverse In-Context Configurations for Image Captioning

Published: 21 Sept 2023, Last Modified: 12 Jan 2024. NeurIPS 2023 poster.
Keywords: Image Captioning; Few-shot Prompting; Vision-Language Model
TL;DR: Using Open-Flamingo to explore the effects of diverse in-context configurations for image captioning.
Abstract: After the discovery that Language Models (LMs) can be good in-context few-shot learners, numerous strategies have been proposed to optimize in-context sequence configurations. Recently, researchers in the Vision-Language (VL) domain have also developed their own few-shot learners, but they rely only on the simplest strategy, i.e., random sampling, to configure in-context image-text pairs. To explore the effects of varying configurations on VL in-context learning, we devised four strategies for image selection and four for caption assignment to configure in-context image-text pairs for image captioning. Image captioning is used as the case study here since it can be viewed as a visually conditioned LM. Our comprehensive experiments yield two counter-intuitive but valuable insights, highlighting the distinct characteristics of VL in-context learning, which arise from multi-modal synergy, compared with the NLP case. Furthermore, in our exploration of optimal combination strategies, we observed an average improvement of 20.9 CIDEr points over the baseline. The code is available at https://github.com/yongliang-wu/ExploreCfg.
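To make the random-sampling baseline mentioned in the abstract concrete, below is a minimal illustrative sketch (not the authors' code) of how in-context image-caption pairs might be sampled and laid out as an interleaved few-shot captioning prompt. The `Shot` dataclass, the helper name, and the prompt template are assumptions loosely modeled on OpenFlamingo's interleaved format; the exact special tokens of the released model may differ.

```python
import random
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Shot:
    image_path: str  # support image (assumed to come from the training split)
    caption: str     # caption assigned to that image

def build_random_prompt(support: List[Shot], query_image: str,
                        n_shots: int = 4, seed: int = 0) -> Tuple[str, List[str]]:
    """Randomly sample n_shots image-caption pairs (the random-sampling baseline)
    and format them as an interleaved few-shot prompt for image captioning."""
    rng = random.Random(seed)
    shots = rng.sample(support, n_shots)
    # Each demonstration pairs an image placeholder with its caption.
    text = "".join(f"<image>Output: {s.caption}<|endofchunk|>" for s in shots)
    # The query image is appended last; the model completes its caption.
    text += "<image>Output:"
    images = [s.image_path for s in shots] + [query_image]
    return text, images

# Usage: prompt_text, image_paths = build_random_prompt(support_pool, "query.jpg", n_shots=4)
```

The paper's image-selection and caption-assignment strategies would replace the `rng.sample` step and the captions attached to each shot, respectively.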
Submission Number: 5771