Enhance membership inference attacks in federated learning

Published: 01 Jan 2024, Last Modified: 13 Nov 2024, Comput. Secur. 2024, CC BY-SA 4.0
Abstract: In federated learning, models unintentionally memorize detailed information about private training data, and the aggregation step requires clients to upload their model parameters to the central server, leaving the models susceptible to membership inference attacks. However, existing membership inference attacks in federated learning have limited effectiveness. This paper proposes a new membership inference attack for federated learning that combines data poisoning with sequence prediction confidence. By injecting poisoned data, the attacker induces the model to memorize detailed information about specific classes in the target private dataset as much as possible. The private information of the target clients retained by the model is then exposed through its output confidence vectors. We aggregate the confidence vectors collected over multiple federated learning epochs and train an AdaBoost classifier to learn membership signals from them. Finally, we partition the confidence scores predicted by the AdaBoost classifier with different thresholds to obtain membership information. We conducted experiments on multiple datasets and models to validate the effectiveness of our attack. The results show high attack effectiveness with minimal degradation of overall model performance.
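
The abstract's inference stage could plausibly look like the minimal sketch below: per-epoch confidence vectors are concatenated into one feature vector per sample, an AdaBoost classifier is trained on them, and its predicted scores are thresholded into membership decisions. All variable names and the synthetic data here are assumptions for illustration; the paper's data-poisoning stage and the querying of the poisoned global model are not reproduced.

```python
# Sketch of the attack's inference stage, assuming confidence vectors
# from each federated epoch have already been collected. The synthetic
# features below stand in for confidences obtained by querying the
# (poisoned) global model at each aggregation round.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Hypothetical dimensions: 10 classes, confidences from 5 epochs,
# concatenated into one feature vector per sample.
n_samples, n_classes, n_epochs = 1000, 10, 5
X = rng.random((n_samples, n_classes * n_epochs))  # aggregated confidences
y = rng.integers(0, 2, n_samples)                  # 1 = member, 0 = non-member

# Train the AdaBoost attack classifier on the aggregated confidence features.
attack = AdaBoostClassifier(n_estimators=100, random_state=0)
attack.fit(X, y)

# Partition the classifier's predicted scores with a tunable threshold
# to obtain membership decisions, as described in the abstract.
scores = attack.predict_proba(X)[:, 1]
threshold = 0.5                                    # swept over in practice
membership = (scores >= threshold).astype(int)
print(f"predicted members: {membership.sum()} / {n_samples}")
```

In practice the threshold would be swept over a range and chosen against a validation set, since the abstract notes that different thresholds are used to partition the predicted confidence scores.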