LFMA: Parameter-Efficient Fine-Tuning via Layerwise Fourier Masked Adapter with Top-k Frequency Selection

05 Sept 2025 (modified: 16 Oct 2025) | Submitted to NeurIPS 2025 2nd Workshop FM4LS | CC BY 4.0
Keywords: LFMA
Abstract: Low-Rank Adaptation (LoRA) has been widely adopted as a parameter-efficient fine-tuning method for large models such as Large Language Models (LLMs) and Vision Transformers (ViTs). However, it faces scalability limitations, particularly in storage and deployment efficiency, when applied to large foundation models or a wide range of task-specific adaptations, owing to the overhead of managing multiple adapters and the reliance on linearly constrained representation spaces. To address these limitations, Fourier Fine-Tuning (FourierFT) has emerged as an alternative that leverages the Fourier transform to achieve comparable or superior performance to LoRA with significantly fewer trainable parameters. Nevertheless, FourierFT applies updates across the entire frequency spectrum, which can be inefficient when the meaningful information is concentrated in a specific subset of frequency components. The magnitude of each Fourier component reflects its contribution to the weight update, so selecting the Top-K components with the highest magnitudes captures the most informative changes. We therefore propose the Layerwise Fourier Masked Adapter (LFMA), which fine-tunes only the Top-K most informative frequency components, improving both parameter efficiency and task-specific adaptation. Empirically, LFMA achieves similar or better performance than FourierFT on four tasks: image classification, instruction tuning, natural language generation, and natural language understanding. These results demonstrate that selectively fine-tuning the most informative frequency components pushes adapter-based fine-tuning further in terms of scalability and expressivity.
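The abstract describes ranking Fourier components by magnitude and fine-tuning only the Top-K positions. The sketch below is a minimal, illustrative PyTorch rendering of that idea under stated assumptions, not the paper's actual implementation: it assumes the Top-K mask is derived from the spectral magnitude of a reference estimate of the weight update, and the names (LFMASketch, ref_update, alpha) are hypothetical.

```python
import torch
import torch.nn as nn

class LFMASketch(nn.Module):
    """Illustrative layerwise Fourier masked adapter.

    A binary Top-K mask over the 2D Fourier spectrum is fixed at construction
    time; only the spectral coefficients at the masked positions are trainable.
    """

    def __init__(self, base_weight: torch.Tensor, ref_update: torch.Tensor,
                 k: int, alpha: float = 1.0):
        super().__init__()
        self.register_buffer("base_weight", base_weight)  # frozen pretrained weight
        self.alpha = alpha

        # Rank frequency components of a reference update by spectral magnitude
        # and keep the Top-K positions (assumption: the reference comes from an
        # initial estimate of the weight update, e.g. a short warm-up phase).
        spectrum = torch.fft.fft2(ref_update)
        flat_mag = spectrum.abs().flatten()
        topk_idx = torch.topk(flat_mag, k).indices
        mask = torch.zeros_like(flat_mag, dtype=torch.bool)
        mask[topk_idx] = True
        self.register_buffer("mask", mask.view(spectrum.shape))

        # Only k complex coefficients (2k real scalars) are trainable.
        self.coeff_real = nn.Parameter(torch.zeros(k))
        self.coeff_imag = nn.Parameter(torch.zeros(k))

    def delta_weight(self) -> torch.Tensor:
        # Scatter the trainable coefficients into an otherwise-zero spectrum,
        # then inverse-FFT back to weight space.
        spectrum = torch.zeros(self.mask.shape, dtype=torch.complex64,
                               device=self.mask.device)
        spectrum[self.mask] = torch.complex(self.coeff_real, self.coeff_imag)
        return torch.fft.ifft2(spectrum).real * self.alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Adapted linear layer: frozen weight plus the sparse-spectrum update.
        return x @ (self.base_weight + self.delta_weight()).T

# Toy usage: a 64x64 layer with only 2*k = 256 trainable scalars.
W = torch.randn(64, 64)            # frozen pretrained weight
ref = torch.randn(64, 64) * 1e-2   # stand-in for an initial update estimate
adapter = LFMASketch(W, ref, k=128)
y = adapter(torch.randn(8, 64))
trainable = sum(p.numel() for p in adapter.parameters() if p.requires_grad)
```

In this sketch the parameter count scales with k rather than with the layer dimensions, which is the storage and deployment advantage the abstract attributes to masking out all but the most informative frequency components.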
Submission Number: 42