Task Matrices: Linear Maps for Cross-Model Finetuning Transfer

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: applied interpretability, model transfer, model adaptation, probing, early exiting
TL;DR: Exploiting implicit linearities in finetuned models, we develop task matrices, which outperform linear probes on vision and text tasks.
Abstract: Results in interpretability suggest that large vision and language models learn implicit linear encodings when biased by in-context prompting. However, the existence of similar linear representations in more general adaptation regimes has not yet been demonstrated. In this work, we develop the concept of a task matrix: a linear transformation from a base model's embedding space to that of its finetuned counterpart. We demonstrate that, across vision and text models and ten datasets, a base model augmented with a task matrix surpasses linear probes and sometimes approaches fully finetuned performance. We show that a linear encoding exists between the embedding spaces of pretrained and finetuned transformers, and that it can be readily exploited through task matrices. These matrices incur low computational cost, are data-efficient, and generalize across multiple domains. We make our implementation publicly available.
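
To make the core idea concrete, the sketch below fits a task matrix by ordinary least squares between paired base and finetuned embeddings. This is an illustrative assumption about the setup, not the paper's exact procedure; the names `base_embeds` and `ft_embeds` and the synthetic data are placeholders for embeddings of the same inputs produced by a base model and its finetuned version.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's exact method):
# fit a task matrix W mapping base-model embeddings to finetuned-model
# embeddings. `base_embeds` / `ft_embeds` are assumed to be paired
# (n_samples, d) embeddings of the same inputs.

rng = np.random.default_rng(0)
n, d = 1024, 256
base_embeds = rng.normal(size=(n, d))  # stand-in for base-model embeddings
# Synthetic stand-in for finetuned embeddings: a small linear shift of the base.
ft_embeds = base_embeds + 0.1 * (base_embeds @ rng.normal(size=(d, d)))

# Closed-form solve of  min_W || base_embeds @ W - ft_embeds ||_F^2.
W, *_ = np.linalg.lstsq(base_embeds, ft_embeds, rcond=None)

# At inference, the base model plus W approximates the finetuned embedding
# space; the mapped embeddings can then feed a task head.
mapped = base_embeds @ W
rel_err = np.linalg.norm(mapped - ft_embeds) / np.linalg.norm(ft_embeds)
print(f"relative reconstruction error: {rel_err:.4f}")
```

Under these assumptions, the linear map is cheap to fit (one least-squares solve over cached embeddings) and needs only the paired embeddings, consistent with the low cost and data efficiency the abstract claims.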
Primary Area: foundation or frontier models, including LLMs
Submission Number: 13653