Abstract: Visual question answering aims to provide answers to questions about visual input. Recently, visual programmatic models (VPMs), which use large language models (LLMs) to generate executable programs that answer questions, have attracted attention. However, they often require long input prompts to give the LLM sufficient API usage details for generating relevant code. To address this limitation, we propose AdaCoder, an adaptive prompt compression framework for VPMs. AdaCoder operates in two phases: a compression phase and an inference phase. In the compression phase, given a preprompt that describes all API definitions with example code snippets, a set of compressed preprompts is generated, each tailored to a specific question type. In the inference phase, AdaCoder predicts the type of the input question and selects the corresponding compressed preprompt to generate code that answers it. In experiments, we apply AdaCoder to ViperGPT and demonstrate that it reduces token length by 71.1% while maintaining or even improving visual question answering performance.
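To make the two-phase design concrete, the sketch below shows how a compression pass and an inference pass might be wired together. This is a minimal illustration only: the helper names (`compress_preprompts`, `answer`), the generic `llm` and `execute` callables, and the `execute_command` entry point are assumptions for exposition, not the released AdaCoder API.

```python
# Minimal sketch of a two-phase prompt-compression pipeline, assuming a
# generic llm(prompt) -> str callable and a program executor. All names
# here are hypothetical illustrations of the approach described above.
from typing import Callable, Dict, List


def compress_preprompts(
    full_preprompt: str,
    question_types: List[str],
    llm: Callable[[str], str],
) -> Dict[str, str]:
    """Compression phase: derive one compressed preprompt per question type,
    keeping only the API definitions and example snippets relevant to it."""
    compressed = {}
    for qtype in question_types:
        request = (
            f"From the API documentation below, keep only the definitions "
            f"and code examples needed to answer '{qtype}' questions.\n\n"
            f"{full_preprompt}"
        )
        compressed[qtype] = llm(request)
    return compressed


def answer(
    question: str,
    image: object,
    compressed: Dict[str, str],
    llm: Callable[[str], str],
    execute: Callable[[str, object], str],
) -> str:
    """Inference phase: predict the question type, select the matching
    compressed preprompt, generate a program, and execute it on the image."""
    qtype = llm(
        f"Classify this question into one of {list(compressed)}: {question}"
    )
    program = llm(
        compressed[qtype]
        + f"\n# Question: {question}\ndef execute_command(image):"
    )
    return execute(program, image)
```

Because compression runs once per question type rather than once per question, its cost is amortized: at inference time only the short type-classification query and the compressed preprompt are sent to the LLM.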