Portable Reward Tuning: Towards Reusable Fine-Tuning across Different Pretrained Models

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: While foundation models have been exploited for various expert tasks through their fine-tuned parameters, any foundation model eventually becomes outdated due to stale knowledge or limited capability, and thus must be replaced by a new one. To benefit from the latest knowledge or improved capability, the new foundation model must then be fine-tuned on each task again, which incurs not only additional training cost but also the maintenance cost of the task-specific data. Existing work addresses this problem by inference-time tuning, i.e., modifying the output probabilities of the new foundation model using the outputs of the old foundation model and its fine-tuned counterpart, which incurs additional inference cost from the latter two models. In this paper, we explore a new fine-tuning principle (which we call portable reward tuning; PRT) that reduces the inference cost by design, based on the reformulation of fine-tuning as reward maximization with Kullback-Leibler regularization. Specifically, instead of fine-tuning the parameters of the foundation model, PRT explicitly trains a reward model through the same loss as in fine-tuning. During inference, the reward model can be used with any foundation model (sharing the same vocabulary or label set) through the reward-maximization formulation. Experimental results on both vision and language models show that a PRT-trained model achieves accuracy comparable to existing inference-time tuning methods at lower inference cost.
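The combination step described in the abstract can be illustrated with a minimal sketch. Under KL-regularized reward maximization, the tuned distribution has the closed form p(y|x) ∝ p_base(y|x) · exp(r(x, y)/β), which in log space is simply adding scaled reward scores to the base model's logits. The function and variable names below (`prt_combine`, `base_logits`, `reward_scores`, `beta`) are hypothetical illustrations, not the paper's actual API:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def prt_combine(base_logits, reward_scores, beta=1.0):
    """Sketch of reward-maximization inference: combine any base
    model's logits with a separately trained reward model's scores.
    p(y|x) ∝ p_base(y|x) * exp(r(x, y) / beta), i.e. in log space
    the reward is added to the base logits before the softmax.
    (Illustrative only; names and scaling are assumptions.)
    """
    return softmax(np.asarray(base_logits) + np.asarray(reward_scores) / beta)

# A uniform base model nudged toward class 0 by a positive reward:
probs = prt_combine([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], beta=1.0)
```

Because the reward model is trained independently of any particular foundation model, the same `reward_scores` can be reused when `base_logits` comes from a newer model over the same vocabulary, which is the portability the paper emphasizes.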
Lay Summary: As foundation models are frequently updated to incorporate newer data or improved architectures, users are often required to fine-tune each new model for their specific tasks, which is both time-consuming and costly. This paper introduces **Portable Reward Tuning** (PRT), a new framework that trains an external reward model rather than directly tuning the foundation model itself. This reward model can then be reused with any compatible foundation model, allowing users to transfer the benefits of prior fine-tuning without additional training. Experiments on both vision and language tasks demonstrate that PRT achieves accuracy on par with previous methods, but with lower computational cost and simpler deployment. This approach could make it much easier and cheaper to keep AI systems up-to-date as foundation models rapidly evolve, helping users benefit from AI advancements at reduced operating expense.
Primary Area: Deep Learning->Foundation Models
Keywords: inference-time tuning, reward maximization
Submission Number: 15647