Do Larger Batch Sizes Always Help? Revisiting Incentive-Driven Differential Privacy Federated Learning

Published: 2026 · Last Modified: 21 Jan 2026 · IEEE Trans. Cogn. Commun. Netw. 2026 · CC BY-SA 4.0
Abstract: Differential Privacy Federated Learning (DP-FL) combines Differential Privacy (DP) with Federated Learning (FL), enabling multiple clients to collaboratively train a shared model while protecting data privacy. However, introducing DP into FL adds noise to model parameters, which typically deteriorates model convergence. Although a recent work revealed a compensation effect obtained by increasing the total batch size, it overlooks the "generalization gap" phenomenon, which is induced by excessively large batch sizes and has long been discussed in the machine learning field. To avoid this other extreme, we strengthen several core components of that work and propose an Incentive-driven Differential Privacy Federated Learning (IDP-FL) framework. First, instead of relying solely on larger batch sizes, the proposed framework jointly considers the non-IID degree of local data and each client's privacy budget, minimizing the difference between the optimal batch size for each selected client and its corresponding critical batch size. Second, we reconfigure the batch size for each selected client by balancing the negative impact of DP noise on convergence against that of the "generalization gap" phenomenon. Finally, we design a Stackelberg game-based incentive mechanism that encourages clients to contribute computational resources, and prove the existence of a Stackelberg equilibrium to guarantee stability. Through numerical evaluations on real-world datasets, we show that our IDP-FL framework outperforms existing algorithms in terms of test accuracy and utility. Ablation studies further confirm the effectiveness of each component.
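The Stackelberg structure mentioned above can be illustrated with a minimal numerical sketch: a server (leader) announces a total reward, clients (followers) choose computational contributions by best response until they reach an equilibrium, and the server picks the reward that maximizes its own utility given that equilibrium. The utility forms below (proportional reward sharing with linear client cost, logarithmic server valuation) and all parameter values are illustrative assumptions, not the paper's actual formulation.

```python
import math

def follower_best_response(R, x, i, c):
    # Client i picks its contribution x_i to maximize an assumed utility:
    # proportional share of reward R minus a linear computation cost c_i * x_i.
    best_u, best_x = -math.inf, 0.01
    others = sum(x) - x[i]
    for k in range(1, 501):  # grid search over [0.01, 5.00]
        xi = k / 100
        u = R * xi / (others + xi) - c[i] * xi
        if u > best_u:
            best_u, best_x = u, xi
    return best_x

def followers_equilibrium(R, c, iters=50):
    # Iterated best response; converges for this concave game.
    x = [1.0] * len(c)
    for _ in range(iters):
        for i in range(len(c)):
            x[i] = follower_best_response(R, x, i, c)
    return x

def leader_utility(R, c, a=5.0):
    # Server values total contribution (diminishing returns) minus the payment.
    X = sum(followers_equilibrium(R, c))
    return a * math.log1p(X) - R

c = [0.5, 0.8, 1.0]  # assumed per-unit computation costs of three clients
candidate_rewards = [0.5 * k for k in range(1, 21)]
best_R = max(candidate_rewards, key=lambda R: leader_utility(R, c))
```

At the resulting equilibrium, lower-cost clients contribute more, and the leader's grid search approximates the Stackelberg-optimal reward; the paper instead proves equilibrium existence analytically.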