Federated Learning with Binary Neural Networks: Competitive Accuracy at a Fraction of the Cost

ICLR 2026 Conference Submission 13160 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Federated Learning, Binary Neural Networks, Deep Learning, Adversarial Robustness
Abstract: Federated Learning (FL) preserves privacy by distributing training across devices. However, deep neural networks (DNNs) remain computationally demanding for low-powered edge devices, especially at inference time. Edge deployment requires models that simultaneously minimize memory footprint and computation, a constraint that conventional DNNs typically violate. Post-training binarization reduces model size but suffers severe accuracy loss due to quantization error. To address these challenges, we propose FedBNN, a rotation-aware binary neural network framework that learns binary representations directly during local training. By encoding each weight as a single bit $\{+1, -1\}$ instead of a $32$-bit float, FedBNN shrinks the model footprint and substantially reduces inference-time FLOPs and memory requirements compared to federated methods that use real-valued models. Evaluations on multiple benchmark datasets show that FedBNN greatly reduces resource consumption while achieving accuracy comparable to existing federated methods based on real-valued models.
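For context, the sketch below illustrates the general idea of training with 1-bit weights: full-precision latent weights are kept for optimization, binarized to $\{+1, -1\}$ in the forward pass, and updated through a straight-through estimator. This is only a minimal, assumed illustration of standard binary-weight training in PyTorch, not the paper's rotation-aware FedBNN method; the class names and hyperparameters are hypothetical.

```python
# Minimal sketch (assumption): sign-based weight binarization with a
# straight-through estimator (STE), as commonly used in binary neural
# networks. Not the authors' FedBNN algorithm.
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Forward: map weights to {+1, -1}. Backward: pass gradients
    through unchanged for weights in [-1, 1] (straight-through)."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        # Treat 0 as +1 so every weight is exactly one bit.
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        # Clip gradients for weights outside [-1, 1] to stabilize training.
        return grad_output * (w.abs() <= 1).to(grad_output.dtype)

class BinaryLinear(nn.Module):
    """Linear layer keeping full-precision latent weights but using their
    binarized {+1, -1} version in the forward pass."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return nn.functional.linear(x, w_bin, self.bias)
```

After training, each $\{+1, -1\}$ weight can be stored and transmitted as a single bit, which is the source of the roughly 32x memory reduction relative to 32-bit floating-point models that the abstract refers to.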
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 13160