OvA-LP: A Simple and Efficient Framework for Federated Learning on Non-IID Data

ICLR 2026 Conference Submission 25470 Authors

20 Sept 2025 (modified: 23 Dec 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: federated learning, non-iid, noisy labels, One-vs-All, Linear Probing
Abstract: Federated fine-tuning (FFT) adapts foundation models to decentralized data but remains fragile under heterogeneous client distributions due to local drift, i.e., client-level update divergences that induce systematic bias and amplified variance in the global model. Existing aggregation and personalization approaches largely correct drift post hoc, which can be brittle under extreme Non-IID conditions. We introduce OvA-LP, a minimalist FFT framework that suppresses drift at its source by combining linear probing on a frozen encoder, one-vs-all heads, and a two-stage schedule informed by a bias–variance perspective. OvA-LP demonstrates strong Non-IID robustness, substantially outperforming state-of-the-art PEFT baselines on CIFAR-100 and DomainNet while maintaining stable performance across participation ratios. Although performance decreases under the most severe domain-shift configuration, OvA-LP remains markedly more stable in practical settings and generalizes across diverse datasets, model architectures, and modalities. These results highlight source-level drift suppression as a viable alternative direction for federated fine-tuning, expanding the design space beyond adaptation-centric approaches.
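For concreteness, a minimal PyTorch-style sketch of the mechanism named in the abstract (linear probing on a frozen encoder combined with one-vs-all heads) is given below. The class and function names (FrozenEncoderOvA, ova_loss) and the parameters feat_dim and num_classes are illustrative assumptions, not the authors' implementation, and the two-stage schedule is omitted.

import torch
import torch.nn as nn

class FrozenEncoderOvA(nn.Module):
    """Illustrative sketch: frozen backbone with a bank of one-vs-all linear heads."""
    def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder
        # Linear probing: the backbone is frozen, only the heads are trained.
        for p in self.encoder.parameters():
            p.requires_grad = False
        # One binary (one-vs-all) logit per class, packed into a single linear layer.
        self.heads = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():  # no gradients flow through the frozen encoder
            z = self.encoder(x)
        return self.heads(z)   # per-class OvA logits

def ova_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # One-vs-all training: each head solves an independent binary problem
    # (sigmoid + BCE) rather than competing through a shared softmax.
    targets = torch.zeros_like(logits).scatter_(1, labels.unsqueeze(1), 1.0)
    return nn.functional.binary_cross_entropy_with_logits(logits, targets)

Under this sketch, one natural federated usage would be to communicate and average only the head parameters across clients, consistent with the abstract's framing of suppressing drift at its source rather than correcting it during aggregation.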
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 25470