Why Does Private Fine-Tuning Resist Differential Privacy Noise? A Representation Learning Perspective
Keywords: Vision transformer; differential privacy; representation learning
TL;DR: We adopt a representation learning law to measure the quality of intermediate representations when privately fine-tuning vision transformers.
Abstract: In this paper, we investigate the impact of differential privacy (DP) on the fine-tuning of publicly pre-trained models, focusing on Vision Transformers (ViTs). We introduce an approach for analyzing DP fine-tuning that leverages a representation learning law to measure the separability of features at each intermediate layer of the model. Through a series of experiments with ViTs pre-trained on ImageNet and fine-tuned on a subset of CIFAR-10, we study how DP noise affects the learned representations. Our results show that, without proper hyperparameter tuning, DP noise can significantly degrade feature quality, particularly in high-privacy regimes. When hyperparameters are optimized, however, the impact of DP noise on the learned representations is limited, and the model attains high accuracy even in high-privacy settings. These findings provide insight into how pre-training on public datasets can help mitigate the privacy-utility trade-off in private deep learning applications.
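To make the layer-wise separability measurement concrete, the snippet below is a minimal sketch (not the paper's released code) of one common way to score a layer's features: comparing within-class to between-class scatter, where lower values indicate more separable representations. The function name, the pseudo-inverse formulation, and the toy data are illustrative assumptions; in practice one would apply such a score to each intermediate layer's (e.g., ViT [CLS]) features under different privacy budgets.

    import numpy as np

    def separation_fuzziness(features: np.ndarray, labels: np.ndarray) -> float:
        """Score one layer's features: features is (N, d), labels is (N,).

        Returns trace(Sigma_W @ pinv(Sigma_B)) / C, where Sigma_W and Sigma_B
        are the within- and between-class scatter matrices and C is the number
        of classes. Lower means better class separation.
        """
        classes = np.unique(labels)
        n, d = features.shape
        global_mean = features.mean(axis=0)
        sigma_w = np.zeros((d, d))  # within-class scatter
        sigma_b = np.zeros((d, d))  # between-class scatter
        for c in classes:
            fc = features[labels == c]
            centered = fc - fc.mean(axis=0)
            sigma_w += centered.T @ centered / n
            diff = (fc.mean(axis=0) - global_mean)[:, None]
            sigma_b += (len(fc) / n) * (diff @ diff.T)
        return float(np.trace(sigma_w @ np.linalg.pinv(sigma_b)) / len(classes))

    if __name__ == "__main__":
        # Toy example: 10 well-separated synthetic classes in 32 dimensions.
        rng = np.random.default_rng(0)
        feats = rng.normal(size=(200, 32)) + np.repeat(np.eye(10, 32) * 5.0, 20, axis=0)
        labs = np.repeat(np.arange(10), 20)
        print(f"fuzziness: {separation_fuzziness(feats, labs):.4f}")

One would extract features from every transformer block of the (DP-)fine-tuned ViT and plot this score per layer, repeating across epsilon values to see how much the noise degrades separability.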
Submission Number: 49