Securing Model Weights Against Eavesdropping Adversaries in Federated Learning Using Quantization

Published: 03 Feb 2026, Last Modified: 02 May 2026 · AISTATS 2026 Poster · CC BY 4.0
TL;DR: Unlike prior work focused on data privacy, we propose a lightweight defense with a formal guarantee on model reconstruction error, providing persistent protection of client models in federated learning while preserving global accuracy.
Abstract: While security research in Federated Learning (FL) has predominantly focused on protecting client data, the *confidentiality of the model parameters* themselves represents a critical and underexplored vulnerability. This work addresses model reconstruction attacks by passive eavesdroppers, a threat present in common update strategies such as transmitting full models or model increments. To our knowledge, we are the first to repurpose dynamic uniform quantization as a dedicated defense for model confidentiality. Our lightweight, architecture-agnostic approach combines low-bit quantization with an adaptive clipping rule to thwart reconstruction attacks, even under warm adversary initialization. We provide theoretical guarantees establishing that our defense offers persistent, non-zero protection under both transmission protocols (full models and model increments). Across extensive experiments on CIFAR-10 and CIFAR-100, with up to 1000 clients in heterogeneous settings, our method reduces the adversary's test accuracy to near-random levels while maintaining global accuracy within 4\% of the unquantized baseline. Our findings establish that repurposing quantization is a simple yet highly effective strategy for securing the largely overlooked area of model confidentiality in FL.
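To make the core mechanism concrete, the following is a minimal illustrative sketch of dynamic uniform quantization with an adaptive clipping rule applied to a flat weight vector. The specific clipping rule (a per-round magnitude quantile) and all parameter names here are assumptions for illustration, not the paper's exact defense.

```python
import numpy as np

def quantize_weights(w, bits=4, clip_quantile=0.99):
    """Dynamically quantize weights to 2**bits uniform levels.

    Hypothetical sketch: the clip range c is recomputed from the
    current weight distribution each round (a quantile of |w| here,
    as a stand-in for the paper's adaptive clipping rule).
    """
    # Adaptive clip range from the current weight magnitudes.
    c = float(np.quantile(np.abs(w), clip_quantile))
    w_clipped = np.clip(w, -c, c)
    # Map [-c, c] onto 2**bits - 1 uniform steps.
    levels = 2 ** bits - 1
    scale = 2 * c / levels if c > 0 else 1.0
    q = np.round((w_clipped + c) / scale)
    # Dequantized values: what is actually transmitted, and hence
    # all an eavesdropper can observe.
    return q * scale - c

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000)   # toy weight vector
w_hat = quantize_weights(w, bits=4)   # at most 16 distinct values
```

With 4 bits the transmitted tensor takes at most 16 distinct values, so an eavesdropper accumulating rounds still faces an irreducible per-round reconstruction error on the order of the quantization step.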
Code Dataset Promise: Yes
Code Dataset Url: https://webpages.charlotte.edu/dmaity/code.html
Signed Copyright Form: pdf
Format Confirmation: I agree that I have read and followed the formatting instructions for the camera ready version.
Submission Number: 1716