Revisit Model Adaptation from Parameters to Features

17 Sept 2025 (modified: 14 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Parameter-Efficient Fine-Tuning, Cross-Model Transfer, Diffusion Models
TL;DR: We propose a simple yet effective fine-tuning approach that places the adaptation components on the input and output features instead of the model parameters, achieving better cross-model transferability.
Abstract: In this paper, we focus on an intriguing question: Can existing fine-tuning adapters, such as LoRA, trained on one model be effectively transferred to its parameter-wise variants? To investigate this problem, we first examine the technical underpinnings of widely adopted parameter-efficient fine-tuning methods. Our theoretical analysis reveals that, due to the strong coupling between adaptation components and base weights, these methods are vulnerable to weight transformations, leading to unsatisfactory cross-model performance and potential model-specific overfitting. To alleviate this issue, we propose two alternatives that place the adaptation on the input and output features, respectively, together with an explicit decoupling scheme. In this way, the adaptation components for an unseen base model are modulated by its native parameters and thus exhibit more robust transferability. Notably, the proposed methods serve as plug-and-play components that require only one-line code modifications. Despite their simplicity, extensive experiments across a variety of models and applications demonstrate that our methods achieve performance comparable to existing counterparts on the source model and consistently outperform them in cross-model transfer settings.
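To make the contrast with weight-space adaptation concrete, below is a minimal, hypothetical PyTorch sketch of the feature-side idea described in the abstract. The class names (LoRALinear, InputFeatureAdapter), the rank, and the initialization are assumptions for illustration, not the authors' released code; the sketch only shows how an adapter applied to input features is composed with whatever base weight the target model provides, rather than being added to a specific weight matrix.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Standard LoRA: the learned low-rank update is added on top of the base
    projection, so the adapter is entangled with this particular base weight."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        # Equivalent to (W + B A) x: the update bypasses the base weight.
        return self.base(x) + x @ self.A.t() @ self.B.t()


class InputFeatureAdapter(nn.Module):
    """Hypothetical feature-side variant: the low-rank adaptation acts on the
    input features, and the base projection is then applied to the adapted
    features, so the effective update is shaped by the base model's own W."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.in_features, rank))

    def forward(self, x):
        # "One-line" change: adapt x before the base projection instead of
        # adding a low-rank term to the weight itself.
        return self.base(x + x @ self.A.t() @ self.B.t())
```

Under this assumed formulation, swapping the base layer for a parameter-wise variant reuses the same trained A and B, while the new model's native weight multiplies the adapted features, which is one way to read the abstract's claim that the adaptation is "modulated by its native parameters."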
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 9006