Resource-efficient Inference with Foundation Model Programs

Published: 08 Jul 2025 · Last Modified: 26 Aug 2025 · COLM 2025 · CC BY 4.0
Keywords: Neurosymbolic Programming, Multimodal LMs, Agent Programming, LLM Computational Efficiency, Multimodal Reasoning
TL;DR: We present a program synthesis method for resource-efficient multimodal reasoning on streaming tasks. Our programs exploit task structure and tailor submodules to each input to achieve a favorable cost-performance tradeoff.
Abstract: The inference-time resource costs of large language and vision models present a growing challenge in production deployments. We propose the use of ***foundation model programs***, i.e., programs that can invoke foundation models with varying resource costs and performance, as an approach to this problem. Specifically, we present a method that translates a task into a program, then learns a policy for resource allocation that, on each input, selects foundation model "backends" for each program module. The policy uses smaller, cheaper backends to handle simpler subtasks, while allowing more complex subtasks to leverage larger, more capable models. We evaluate the method on two new "streaming" visual question-answering tasks in which a system answers a question on a sequence of inputs, receiving ground-truth feedback after each answer. Compared to monolithic multi-modal models, our implementation achieves up to 98% resource savings with minimal accuracy loss, demonstrating its potential for scalable and resource-efficient multi-modal inference. The source code and benchmarks are available at [GitHub](https://github.com/Flitternie/FMProgramming).
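To make the core idea concrete, the following is a minimal illustrative sketch, not the authors' implementation (see the linked repository for that). It shows a toy foundation model program with two modules, each of which can be served by interchangeable backends of different cost, and a policy that picks a backend per module per input. All names here (`Backend`, `Policy`, `vqa_program`, the stub model calls) and the fixed difficulty-threshold heuristic are hypothetical; the paper instead learns the allocation policy online from ground-truth feedback.

```python
# Illustrative sketch only -- not the paper's implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Backend:
    name: str                  # e.g., a small vs. a large multimodal LM
    cost: float                # relative inference cost
    run: Callable[[str], str]  # prompt -> answer (model call stub)

# Stub calls standing in for real small/large multimodal LM backends.
def small_model(prompt: str) -> str:
    return f"[small-vlm answer to: {prompt}]"

def large_model(prompt: str) -> str:
    return f"[large-vlm answer to: {prompt}]"

BACKENDS = [
    Backend("small-vlm", cost=1.0, run=small_model),
    Backend("large-vlm", cost=20.0, run=large_model),
]

class Policy:
    """Toy per-module backend selector.

    The paper learns this choice from streaming ground-truth feedback;
    here a fixed difficulty threshold keeps the sketch self-contained.
    """
    def choose(self, module: str, difficulty: float) -> Backend:
        return BACKENDS[1] if difficulty > 0.5 else BACKENDS[0]

def vqa_program(question: str, policy: Policy, difficulty: float) -> str:
    # Module 1: ground the question (a cheap backend suffices on easy inputs).
    grounder = policy.choose("ground", difficulty)
    region = grounder.run(f"Locate the region relevant to: {question}")
    # Module 2: answer over the grounded region (may warrant a larger backend).
    answerer = policy.choose("answer", difficulty)
    return answerer.run(f"Answer '{question}' using {region}")

if __name__ == "__main__":
    print(vqa_program("What color is the car?", Policy(), difficulty=0.2))
```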
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 847