Securing Model Weights against Eavesdropping Adversaries in Federated Learning using Quantization
TL;DR: Unlike prior work, which focuses on data privacy, we propose a lightweight defense with a formal guarantee on model reconstruction error, providing persistent protection of client models in federated learning while preserving global accuracy.
Abstract: While security research in Federated Learning (FL) has predominantly focused on protecting client data, the confidentiality of the model parameters themselves represents a critical and underexplored vulnerability. This work addresses model reconstruction attacks by passive eavesdroppers, a threat present in common update strategies such as transmitting full models (FLOP) or model increments (FLIP). To our knowledge, we are the first to repurpose dynamic uniform quantization as a dedicated defense for model confidentiality. Our lightweight, architecture-agnostic approach combines low-bit quantization with an adaptive clipping rule to thwart reconstruction attacks, even under warm adversary initialization. We provide theoretical guarantees establishing that our defense offers persistent, non-zero protection in both protocols. Across extensive experiments on CIFAR-10 and CIFAR-100, with up to 1000 clients in heterogeneous settings, our method reduces the adversary's test accuracy to near-random levels while maintaining global model accuracy within 4\% of the unquantized baseline. Our findings establish that repurposing quantization is a simple yet highly effective strategy for securing the largely overlooked area of model confidentiality in FL.
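The abstract describes the defense as low-bit dynamic uniform quantization combined with an adaptive clipping rule applied to each transmitted model or increment. Below is a minimal sketch of that general recipe; the function names, the 2-bit setting, and the standard-deviation-based clipping threshold are illustrative assumptions, not the paper's actual rule.

```python
# Illustrative sketch only: per-tensor dynamic uniform quantization with an
# adaptive clipping threshold. The std-based threshold below is a hypothetical
# stand-in for the paper's adaptive clipping rule.
import numpy as np

def quantize_update(w: np.ndarray, bits: int = 2, clip_k: float = 2.5):
    """Clip a weight (or increment) tensor, then uniformly quantize it to `bits` bits."""
    # Adaptive clipping: bound values to clip_k standard deviations around the mean (assumed rule).
    mu, sigma = float(w.mean()), float(w.std())
    lo, hi = mu - clip_k * sigma, mu + clip_k * sigma
    w_clipped = np.clip(w, lo, hi)

    # Dynamic uniform quantization: scale is recomputed per tensor, per round.
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((w_clipped - lo) / scale).astype(np.int32)  # low-bit integers to transmit
    return q, lo, scale

def dequantize(q: np.ndarray, lo: float, scale: float) -> np.ndarray:
    """Reconstruct the coarse update from the quantized message."""
    return q.astype(np.float32) * scale + lo

if __name__ == "__main__":
    update = (np.random.randn(1000) * 0.01).astype(np.float32)
    q, lo, scale = quantize_update(update, bits=2)
    recon = dequantize(q, lo, scale)
    print("max reconstruction error:", float(np.abs(update - recon).max()))
```

An eavesdropper observing only the low-bit messages recovers at best the coarse reconstruction returned by `dequantize`, which is the source of the non-zero reconstruction error the paper's guarantee formalizes.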
Submission Number: 1716