Keywords: foundation models, adaptation, parameter-efficient fine-tuning
Abstract: Parameter-efficient fine-tuning (PEFT) methods, such as LoRA, reduce adaptation cost by injecting low-rank updates into pretrained weights. However, LoRA’s down-projection is randomly initialized and data-agnostic, discarding potentially useful information. Prior analyses show that this projection changes little during training, while the up-projection carries most of the adaptation, making the random input compression a performance bottleneck. We propose IPA, a feature-aware projection framework that explicitly preserves information in the reduced hidden space. In the linear case, we instantiate IPA with algorithms approximating top principal components, enabling efficient projector pretraining with negligible inference overhead. Across language and vision benchmarks, IPA consistently improves over LoRA and DoRA, achieving up to 1.5 points higher accuracy on commonsense reasoning and 2.3 points on VTAB-1k, while matching best baseline performance with roughly half the trainable parameters when the projection is frozen.
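Illustrative sketch (not from the submission): the abstract describes replacing LoRA's random down-projection with a feature-aware projector built from top principal components of the layer inputs. The module below is a hypothetical, minimal PyTorch rendering of that idea; the class name `IPALinear`, the `calib_inputs` calibration tensor, and the `freeze_projection` flag are all assumptions for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class IPALinear(nn.Module):
    """Hypothetical sketch of a feature-aware low-rank adapter.

    Instead of LoRA's random, data-agnostic down-projection, the
    down-projection A is initialized from the top principal components of
    calibration inputs, so the reduced hidden space preserves input
    information. A may be frozen, leaving only the up-projection trainable.
    """

    def __init__(self, base: nn.Linear, rank: int,
                 calib_inputs: torch.Tensor, freeze_projection: bool = True):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen

        # Feature-aware projector: top-`rank` principal directions of the
        # calibration inputs (shape: [num_samples, in_features]).
        _, _, v = torch.pca_lowrank(calib_inputs, q=rank)   # v: [in_features, rank]
        self.A = nn.Parameter(v.T.contiguous(),             # A: [rank, in_features]
                              requires_grad=not freeze_projection)

        # Up-projection starts at zero so the adapter is a no-op at init.
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + B (A x): pretrained output plus the low-rank update.
        return self.base(x) + (x @ self.A.T) @ self.B.T
```

With `freeze_projection=True`, only `B` (out_features × rank) is trained, which is roughly half the trainable parameters of a standard LoRA adapter of the same rank, consistent with the parameter-efficiency claim in the abstract.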
Serve As Reviewer: ~Yuan_Yin1
Submission Number: 44