Exploring Sparsity for Parameter Efficient Fine Tuning Using Wavelets

ICLR 2026 Conference Submission 16635 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: PEFT, Fine Tuning, Diffusion, Image Generation, Personalization
TL;DR: We propose WaveFT, a Parameter-Efficient Fine-Tuning (PEFT) method that is more extensible, flexible, and efficient than existing methods.
Abstract: Efficiently adapting large vision models is critical, especially under tight compute and memory budgets. Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA offer limited granularity and effectiveness in few-parameter regimes. We propose Wavelet Fine-Tuning (WaveFT), a novel PEFT method that learns highly sparse updates in the wavelet domain of residual matrices. WaveFT allows precise control over the number of trainable parameters, offering fine-grained capacity adjustment and excelling at remarkably low parameter counts, substantially fewer than LoRA's minimum, which makes it ideal for extreme parameter-efficiency scenarios. Evaluated on personalized text-to-image generation with Stable Diffusion XL as the base model, WaveFT significantly outperforms state-of-the-art PEFT methods, especially at low parameter counts, achieving superior subject fidelity, prompt alignment, and image diversity.
Primary Area: generative models
Submission Number: 16635
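
The abstract describes the core mechanism: train only a sparse set of coefficients in the wavelet domain, then apply an inverse wavelet transform to obtain the dense weight residual. Below is a minimal sketch of that idea, not the authors' implementation: it assumes a single-level Haar transform, even weight dimensions, and a fixed random coefficient support, and the names (inverse_haar_2d, WaveFTLayer, n_coeffs) are hypothetical.

```python
# Illustrative sketch only, under the assumptions stated above;
# not the paper's actual WaveFT code.
import torch
import torch.nn as nn


def inverse_haar_2d(c: torch.Tensor) -> torch.Tensor:
    """Single-level inverse 2D Haar transform of a coefficient map
    laid out as [[LL, LH], [HL, HH]] quadrants (even dims assumed)."""
    h, w = c.shape[0] // 2, c.shape[1] // 2
    ll, lh = c[:h, :w], c[:h, w:]
    hl, hh = c[h:, :w], c[h:, w:]
    out = torch.empty_like(c)
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out


class WaveFTLayer(nn.Module):
    """Wraps a frozen linear layer; only n_coeffs wavelet-domain
    coefficients of the residual are trainable, so the trainable
    parameter count is set exactly (hypothetical class name)."""

    def __init__(self, base: nn.Linear, n_coeffs: int):
        super().__init__()
        self.base = base.requires_grad_(False)  # freeze pretrained weights
        d_out, d_in = base.weight.shape
        # Fixed random support of size n_coeffs in the wavelet domain.
        idx = torch.randperm(d_out * d_in)[:n_coeffs]
        self.register_buffer("rows", idx // d_in)
        self.register_buffer("cols", idx % d_in)
        self.values = nn.Parameter(torch.zeros(n_coeffs))  # trainable

    def delta_w(self) -> torch.Tensor:
        # Scatter the sparse coefficients, then map them back to the
        # weight domain to form a dense residual matrix.
        c = torch.zeros_like(self.base.weight)
        c[self.rows, self.cols] = self.values
        return inverse_haar_2d(c)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.delta_w().t()


# Example: adapt a 64x64 linear layer with exactly 128 trainable parameters,
# well below LoRA's rank-1 minimum of 128 + 64... i.e., d_out + d_in = 128.
layer = WaveFTLayer(nn.Linear(64, 64), n_coeffs=128)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 128
```

Because n_coeffs can be any integer, this construction gives per-parameter granularity, in contrast to LoRA, whose smallest update (rank 1) already costs d_out + d_in parameters per layer.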