GL Equivariant Metanetworks for Learning on Low Rank Weight Spaces

Published: 08 Nov 2025 · Last Modified: 08 Nov 2025 · LOG 2025 Oral · CC BY 4.0
Keywords: Equivariance, symmetry, geometric deep learning, weight-space architectures, finetuning
Abstract: Low-rank adaptations (LoRAs) have revolutionized the finetuning of large foundation models, enabling efficient adaptation even with limited computational resources. The resulting proliferation of LoRAs, together with recent advances in weight-space learning, presents exciting opportunities for applying machine learning techniques that take these low-rank weights themselves as inputs. In this paper, we investigate the potential of Learning on LoRAs (LoL), a setup in which machine learning models learn and make predictions on datasets of LoRA weights. Motivated by previous work on weight-space learning, we first identify the inherent parameter symmetries of our data -- low-rank decompositions of weights -- which differ significantly from the parameter symmetries of standard neural networks. To efficiently process LoRA weights, we develop several symmetry-aware invariant or equivariant LoL models. In diverse experiments, we show that our LoL architectures can process LoRA weights to predict CLIP scores, finetuning data attributes, finetuning data membership, and accuracy on downstream tasks. We also show that LoL models trained on LoRAs of one pretrained model can effectively generalize to LoRAs trained on other models from the same model family. As an example of the utility of LoL, our LoL models can accurately estimate CLIP scores of diffusion models and ARC-C test accuracy of LLMs over 50,000 times faster than standard evaluation. As part of this work, we finetuned and will release datasets of more than ten thousand text-to-image diffusion-model and language-model LoRAs.
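To make the parameter symmetry concrete, below is a minimal numpy sketch (not taken from the released code; the shapes, names, and the invariant used are illustrative assumptions) showing that a LoRA update W = BA is unchanged when the factors (B, A) are replaced by (B M^{-1}, M A) for any invertible rank-by-rank matrix M, i.e., the GL(r) symmetry that the paper's invariant and equivariant LoL models account for.

```python
# Illustrative sketch of the GL(r) symmetry of a LoRA factorization.
# The low-rank update W = B @ A is invariant under (B, A) -> (B @ inv(M), M @ A)
# for any invertible M, so an invariant LoL model should give the same output
# on both copies of the weights. Shapes and names below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 16, 32, 4            # illustrative LoRA dimensions, rank r

B = rng.standard_normal((d_out, r))   # "up" factor
A = rng.standard_normal((r, d_in))    # "down" factor

M = rng.standard_normal((r, r)) + r * np.eye(r)  # a (well-conditioned) element of GL(r)

B_sym = B @ np.linalg.inv(M)          # transformed factors
A_sym = M @ A

# The product -- and hence any function of it -- is unchanged.
assert np.allclose(B @ A, B_sym @ A_sym)
print("max deviation:", np.abs(B @ A - B_sym @ A_sym).max())
```

Note this symmetry group (all invertible r x r matrices acting on the low-rank factors) differs from the permutation-style symmetries of standard neural network weights, which is why the paper designs dedicated GL-equivariant architectures rather than reusing existing weight-space models.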
Software: https://anonymous.4open.science/r/LoL-925B/
Submission Type: Full paper proceedings track submission (max 9 main pages).
Submission Number: 156