Abstract: Membership inference attacks have recently become a well-known threat that can disclose the training data of deep learning models and thereby leak sensitive information under several circumstances. Improving a model's generalization is one of the key approaches to preventing such attacks, and this generalization capability can be achieved through careful examination of the loss landscape and the use of a large learning rate. Building on the popular stochastic gradient descent optimizer, our work explores the connection between the training learning rate and the resulting model's loss landscape in defending against membership attacks. We find that the flat regions of the loss landscape induced by a large learning rate tend to better preserve the model's privacy while maintaining good prediction accuracy. We validate our findings with three model architectures, ResNet-18, VGG-11, and a 2-layer MLP, on two popular datasets, FashionMNIST and CIFAR10. The results show that a large learning rate can improve model privacy by 2–4% while also improving model accuracy.