Keywords: federated learning, privacy, scalability, efficiency, gradient compression, membership inference attacks, data reconstruction attacks
TL;DR: ERIS is a serverless FL framework that combines model partitioning and gradient compression to reduce communication by 94%, eliminate the server bottleneck, and improve privacy guarantees without sacrificing accuracy.
Abstract: Scaling federated learning (FL) to billion-parameter models introduces critical trade-offs between communication efficiency, network load distribution, model accuracy, and privacy guarantees. Existing solutions often tackle these challenges in isolation, sacrificing accuracy or relying on costly cryptographic tools. We propose ERIS, a serverless FL framework that balances privacy and accuracy while eliminating the server bottleneck and significantly reducing communication overhead. ERIS combines a model partitioning strategy, which distributes aggregation across multiple client-side aggregators, with a distributed shifted gradient compression mechanism. We theoretically prove that ERIS (i) converges at the same rate as FedAvg under standard assumptions, and (ii) bounds mutual information leakage inversely in the number of aggregators, enabling strong privacy guarantees with no accuracy degradation. Extensive experiments on image and text datasets, ranging from small networks to modern large language models, confirm our theory. Compared to six baselines, ERIS consistently outperforms all privacy-enhancing methods and matches the accuracy of non-private FedAvg, while reducing model distribution time by up to $1000\times$ and communication cost by over 94\%. It also lowers the membership inference attack success rate from $\sim$83\% to $\sim$65\%, close to the unattainable $\sim$64\% limit, and degrades data reconstruction attacks to random-level quality. ERIS establishes a new Pareto frontier for scalable, privacy-preserving FL for next-generation foundation models without relying on heavy cryptography or noise injection.
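The abstract does not specify the shifted gradient compression mechanism, so the following is only an illustrative sketch of the general idea behind shifted compressors: each client compresses the *difference* between its gradient and a locally maintained shift vector (rather than the raw gradient), so the transmitted residual shrinks as the shift tracks the gradient over rounds. The top-k compressor, the shift-update rule, and all names here are assumptions for illustration, not the algorithm used in ERIS.

```python
import numpy as np

def topk_compress(v: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out = np.zeros_like(v)
    out[idx] = v[idx]
    return out

def shifted_compress(grad: np.ndarray, shift: np.ndarray, k: int):
    """Compress (grad - shift) instead of grad, then add the shift back.

    Returns the reconstructed (approximate) gradient and an updated shift.
    The 0.5 step size for the shift update is an arbitrary illustrative choice.
    """
    delta = topk_compress(grad - shift, k)      # only this sparse residual is sent
    new_shift = shift + 0.5 * delta             # shift drifts toward the gradient
    return shift + delta, new_shift
```

As the shift converges toward the stationary gradient, the residual `grad - shift` becomes small, so aggressive compression of the residual loses little information while the raw gradient is never transmitted directly.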
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 16811