Block ModShift: Model Privacy via Dynamic Designed Shifts

Published: 24 Sept 2025, Last Modified: 18 Nov 2025 · AI4NextG @ NeurIPS 2025 Poster · CC BY 4.0
Keywords: Model privacy, distributed optimization, federated learning, Fisher Information Matrix, estimation bounds, convergence
Abstract: The problem of multi-shot model privacy against an eavesdropper (Eve) in a distributed learning environment is investigated. The solution is obtained by evaluating the Fisher Information Matrix (FIM) of Eve's model-learning problem. Through a model-shift design process, the eavesdropper's FIM can be driven to singularity, yielding a provably hard estimation problem for Eve. The designed shifts are time-varying, which prevents Eve from exploiting the temporal correlation of the updates to aid her estimation. A convergence test is designed for Eve to determine whether model updates have been tampered with; however, under a bounded gradient dissimilarity assumption, the Block ModShift strategy passes this test, so the shifts are not detectable. Block ModShift is compared against a noise injection scheme and shown to offer superior performance. We numerically demonstrate the efficacy of Block ModShift in preventing temporal leakage in a setup biased towards Eve's learning ability, where she uses Kalman smoothing to estimate the updates.
Submission Number: 61
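As a loose illustration of the mechanism the abstract describes, the sketch below shows time-varying, block-wise shifts added to a model update before transmission and removed by the intended receiver, so that an eavesdropper observes only the shifted update. This is a minimal sketch under assumed conventions: the function names (`block_shifts`, `transmit`, `receive`), the shared-seed synchronization, and the Gaussian per-block values are hypothetical placeholders, not the paper's actual FIM-driven shift design.

```python
import numpy as np

def block_shifts(shape, block_size, round_idx, seed=0):
    """Hypothetical time-varying, block-constant shift for one communication round."""
    rng = np.random.default_rng(seed + round_idx)  # fresh shift each round
    n = int(np.prod(shape))
    n_blocks = int(np.ceil(n / block_size))
    per_block = rng.standard_normal(n_blocks)      # one value per block (placeholder design)
    return np.repeat(per_block, block_size)[:n].reshape(shape)

def transmit(update, block_size, round_idx, seed=0):
    """Sender side: add the designed shift before sending the model update."""
    return update + block_shifts(update.shape, block_size, round_idx, seed)

def receive(shifted_update, block_size, round_idx, seed=0):
    """Receiver side: subtract the same shift to recover the true update."""
    return shifted_update - block_shifts(shifted_update.shape, block_size, round_idx, seed)

# Usage: the legitimate receiver recovers the update exactly; Eve sees only the shifted version.
rng = np.random.default_rng(1)
true_update = rng.standard_normal((8, 4))
observed_by_eve = transmit(true_update, block_size=4, round_idx=3, seed=42)
recovered = receive(observed_by_eve, block_size=4, round_idx=3, seed=42)
assert np.allclose(recovered, true_update)
```

Because the shift is redrawn every round, consecutive observed updates are decorrelated from Eve's perspective, which is the intuition behind preventing her from exploiting temporal correlation (e.g., via Kalman smoothing) as discussed in the abstract.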