FedSAN: Replayable DP Accounting for Asynchronous Federated LLM Tuning under Out-of-Order Updates

ACL ARR 2026 January Submission 5446 Authors

05 Jan 2026 (modified: 20 Mar 2026) · License: CC BY 4.0
Keywords: Differential Privacy, Asynchronous Training, Privacy Accounting, Large Language Models, LoRA / Parameter-Efficient Fine-Tuning
Abstract: Asynchronous federated learning (FL) is a practical default for scaling instruction tuning of large language models (LLMs), but out-of-order (OOO) arrivals break the audit assumptions of deployed differential privacy (DP) accounting. We show that arrival-order accounting can misattribute snapshot-tied updates to the wrong privacy events, yielding a ledger that is not replayable from execution provenance and can violate strict budgets. We propose FedSAN-LLM: clients attach authenticated snapshot provenance, and the server deterministically reconstructs a snapshot-consistent released event stream for reconstructed-order DP accounting (R-RDP), producing an auditable ledger under OOO delivery; optionally, clients apply randomized subspace projection as DP post-processing to compress privatized LoRA updates without changing the privacy guarantee. On Llama-3-8B LoRA tuning for PubMedQA and FiQA, FedSAN-LLM matches near-synchronous accuracy while providing a $2.6\times$ wall-clock speedup over synchronous training, with small audit deviation.
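
To make the pipeline described above concrete, here is a minimal Python sketch of the server-side ideas the abstract names: deterministic reconstruction of a snapshot-consistent event stream from authenticated provenance, a per-event RDP ledger over the reconstructed order, and a seeded random-projection compressor applied as pure DP post-processing. All identifiers (ProvenancedUpdate, reconstruct_event_stream, rdp_ledger, project_update), the sort key, and the fixed Gaussian-mechanism cost of $\alpha/(2\sigma^2)$ per released event are illustrative assumptions under a toy setup, not the paper's actual API or accountant.

    from dataclasses import dataclass
    import hashlib
    import hmac

    import numpy as np


    @dataclass(frozen=True)
    class ProvenancedUpdate:
        client_id: str
        snapshot_version: int  # model snapshot the client trained against
        payload_digest: str    # hash of the privatized LoRA update
        tag: str               # HMAC over (client_id, snapshot_version, payload_digest)


    def verify(u: ProvenancedUpdate, key: bytes) -> bool:
        # Authenticate the provenance so the reconstructed order is trustworthy.
        msg = f"{u.client_id}|{u.snapshot_version}|{u.payload_digest}".encode()
        return hmac.compare_digest(
            hmac.new(key, msg, hashlib.sha256).hexdigest(), u.tag
        )


    def reconstruct_event_stream(arrivals, key: bytes):
        # Deterministic, arrival-order-independent replay: sort verified updates
        # by (snapshot_version, client_id) so any auditor re-derives the same stream.
        return sorted(
            (u for u in arrivals if verify(u, key)),
            key=lambda u: (u.snapshot_version, u.client_id),
        )


    def rdp_ledger(events, alpha: float, sigma: float):
        # Toy accountant: each released Gaussian-mechanism event with noise
        # multiplier sigma costs alpha / (2 * sigma**2) at Renyi order alpha.
        eps, ledger = 0.0, []
        for ev in events:
            eps += alpha / (2.0 * sigma ** 2)
            ledger.append((ev.snapshot_version, ev.client_id, eps))
        return ledger


    def project_update(delta: np.ndarray, k: int, seed: int) -> np.ndarray:
        # Seeded Gaussian sketch of an *already privatized* update: a deterministic
        # function of DP output, so it consumes no additional privacy budget.
        rng = np.random.default_rng(seed)
        proj = rng.standard_normal((k, delta.size)) / np.sqrt(k)
        return proj @ delta.ravel()

Because the sort key depends only on authenticated provenance, two auditors replaying the same multiset of arrivals produce identical ledgers regardless of delivery order, which is the replayability property the abstract claims; the projection seed would likewise be logged so the compressed update is reproducible.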
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: optimization methods, generalization, transfer learning / domain adaptation, representation learning
Contribution Types: NLP engineering experiment, Approaches for low compute settings - efficiency, Theory
Languages Studied: English
Submission Number: 5446