Exploring Sparsity for Parameter Efficient Fine Tuning Using Wavelets

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: PEFT, Fine Tuning, Diffusion, Image Generation, Personalization
TL;DR: We propose WaveFT, an extensible, flexible Parameter-Efficient Fine-Tuning (PEFT) method that is more efficient than existing approaches.
Abstract: Efficiently adapting large pretrained models is critical under tight compute and memory budgets. While PEFT methods like LoRA achieve efficiency through low-rank updates, their discrete rank constraint limits fine-grained parameter control and confines adaptations to low-dimensional subspaces. We propose Wavelet Fine-Tuning (WaveFT), which learns sparse updates in the wavelet domain of weight matrices, enabling fine-grained control over trainable parameters well below LoRA's minimum rank. Wavelet bases provide semi-local receptive fields that aggregate spatially coherent gradients, offering better coverage than direct weight sparsity (SHiRA) without the destructive interference of global Fourier bases (FourierFT). This structure naturally matches vision tasks where gradients are sparse during fine-tuning, since most pretrained weights require minimal adjustment. We provide theoretical analysis showing: (i) sparse methods achieve high-rank updates, avoiding LoRA's subspace bottleneck and enabling higher representational capacity, and (ii) a gradient coverage framework explaining when wavelet-domain adaptation outperforms alternatives. We perform experiments across text-to-image generation (SDXL), image classification (ViT), and language understanding (GLUE). WaveFT demonstrates state-of-the-art results among PEFT methods for vision tasks, where wavelets effectively capture sparse gradient structure through improved coverage, while performing comparably on NLP benchmarks.
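To make the core idea concrete, here is a minimal sketch of wavelet-domain sparse adaptation as described in the abstract. This is a hypothetical illustration, not the authors' implementation: it uses a single-level orthonormal Haar transform (WaveFT may use other wavelet bases and multi-level decompositions), and the `trainable` coefficient positions and values are invented for demonstration. The point is that a handful of learned wavelet coefficients, pushed through the inverse transform, yield a dense, high-rank weight update.

```python
def haar_1d(row):
    """One level of the 1D orthonormal Haar transform: averages then differences."""
    s = 0.5 ** 0.5
    half = len(row) // 2
    avg = [(row[2 * i] + row[2 * i + 1]) * s for i in range(half)]
    dif = [(row[2 * i] - row[2 * i + 1]) * s for i in range(half)]
    return avg + dif

def ihaar_1d(coeffs):
    """Inverse of haar_1d."""
    s = 0.5 ** 0.5
    half = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:half], coeffs[half:]):
        out.extend([(a + d) * s, (a - d) * s])
    return out

def haar_2d(mat):
    """Separable 2D Haar: transform rows, then columns."""
    rows = [haar_1d(r) for r in mat]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def ihaar_2d(mat):
    """Inverse 2D Haar: invert columns, then rows."""
    cols = [ihaar_1d(list(c)) for c in zip(*mat)]
    rows = [list(r) for r in zip(*cols)]
    return [ihaar_1d(r) for r in rows]

# Sparse adaptation: only k wavelet coefficients are trainable (positions and
# values here are arbitrary stand-ins for what gradient descent would learn).
n = 4
trainable = {(0, 0): 0.5, (1, 3): -0.25}
coeffs = [[0.0] * n for _ in range(n)]
for (i, j), v in trainable.items():
    coeffs[i][j] = v

# Inverse transform maps the k sparse coefficients to a dense weight update,
# which would be added to the frozen pretrained weights: W_adapted = W + delta_w.
delta_w = ihaar_2d(coeffs)
```

Because each wavelet basis function has a semi-local receptive field, a single coefficient influences a coherent block of the weight matrix, which is the coverage property the abstract contrasts with direct weight sparsity (SHiRA) and global Fourier bases (FourierFT).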
Primary Area: generative models
Submission Number: 16635