Efficient Fine-Tuning via Behavior-Guided Spectral Alignment

ICLR 2026 Conference Submission 16728 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · License: CC BY 4.0
Keywords: Behavior alignment, representation learning, teacher-free learning, low-data learning, internal representation dynamics, efficient transfer learning
TL;DR: BAFT adds behavioral guidance to PEFT by aligning internal features with model predictions
Abstract: Parameter-Efficient Fine-Tuning (PEFT) has become a practical approach for adapting large vision models under limited data and computational budgets. However, existing PEFT methods focus primarily on where to inject trainable parameters, offering little guidance on how internal representations should evolve during adaptation. This often yields a passive fine-tuning process that lacks explicit alignment with the target task's structure, especially in low-data or task-diverse settings. We propose Behavior-Aligned Fine-Tuning (BAFT), a simple, parameter-free, and teacher-free method that introduces behavioral constraints during fine-tuning without changing the model architecture. BAFT extracts the relational structure of model predictions, capturing how samples relate in the output space, and aligns it with intermediate feature representations by minimizing the distance between their cosine similarity matrices. This alignment acts as a lightweight, task-aware regularizer that guides internal representations toward the decision structure of the target task. BAFT requires no additional trainable parameters, adds minimal overhead, and integrates seamlessly with a wide range of PEFT methods, including LoRA, AdaptFormer, Bi-LoRA, and Bi-AdaptFormer. On VTAB-1k and few-shot fine-grained classification benchmarks, BAFT consistently improves performance over strong PEFT baselines. Analyses of gradient behavior, spectral alignment, and attention dynamics further show how BAFT promotes more structured, task-aligned representations. By turning output-space behavior into actionable training signals, BAFT reframes fine-tuning as an active, guided process, offering a principled direction for parameter-efficient model adaptation.
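As a concrete illustration of the alignment described in the abstract, below is a minimal PyTorch sketch of a cosine-similarity-matrix alignment regularizer. This is not the authors' reference implementation: the function names (`cosine_similarity_matrix`, `baft_alignment_loss`), the use of softmax probabilities as the output-space behavior, the detached prediction target, and the mean-squared (Frobenius) distance between the two similarity matrices are all illustrative assumptions.

```python
# Hypothetical sketch of the behavior-alignment regularizer described in the
# abstract. Assumptions (not from the paper): predictions are softmax
# probabilities, features are pooled intermediate representations, and the
# distance between similarity matrices is the mean squared difference.
import torch
import torch.nn.functional as F


def cosine_similarity_matrix(x: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarities between rows of x: (B, D) -> (B, B)."""
    x = F.normalize(x, dim=-1)
    return x @ x.T


def baft_alignment_loss(features: torch.Tensor, logits: torch.Tensor) -> torch.Tensor:
    """Align the relational structure of intermediate features with the
    relational structure of the model's predictions (hypothetical form)."""
    # Output-space behavior: how samples relate in prediction space.
    # Detached so the regularizer shapes features toward the behavior target.
    pred_sim = cosine_similarity_matrix(logits.softmax(dim=-1).detach())
    # Feature-space structure: how the same samples relate internally.
    feat_sim = cosine_similarity_matrix(features)
    return F.mse_loss(feat_sim, pred_sim)


# Usage during PEFT fine-tuning (e.g., on top of LoRA), with features of
# shape (B, D) from an intermediate layer and logits of shape (B, C):
#   loss = task_loss + lambda_align * baft_alignment_loss(features, logits)
```

Because the regularizer only adds a loss term, it introduces no trainable parameters and leaves the underlying PEFT method and model architecture untouched, consistent with the abstract's description.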
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 16728