Keywords: model versioning, robustness, scalability
TL;DR: Use hidden distributions to generate scalable and adversarially robust model versions.
Abstract: As the deployment of deep learning models continues to expand across industries, the threat of malicious incursions aimed at gaining access to these deployed models is on the rise. Should an attacker gain access to a deployed model, whether through server breaches, insider attacks, or model inversion techniques, they can construct white-box adversarial attacks that manipulate the model's classification outcomes, posing significant risks to organizations that rely on these models for critical tasks. Model owners need mechanisms to protect themselves against such losses without acquiring fresh training data, a process that typically demands substantial investments in time and capital.
In this paper, we explore the feasibility of generating multiple versions of a model that possess different attack properties, without acquiring new training data or changing the model architecture. The model owner can deploy one version at a time and immediately replace a leaked version with a new one. The newly deployed version can resist adversarial attacks generated using white-box access to one or all previously leaked versions. We show theoretically that this can be accomplished by incorporating parameterized *hidden distributions* into the model's training data, forcing the model to learn task-irrelevant features uniquely defined by the chosen data. Moreover, optimal choices of hidden distributions can produce a sequence of model versions capable of resisting compound transferability attacks over time. Leveraging our analytical insights, we design and implement a practical model versioning method for DNN classifiers, which yields significant robustness improvements over existing methods. We believe our work presents a promising direction for safeguarding DNN services beyond their initial deployment.
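To make the idea of versioning via hidden distributions concrete, the sketch below shows one plausible way such a scheme could be wired up: each version is tagged by a fixed pattern drawn from a parameterized Gaussian that is blended into the training inputs, so different versions learn different task-irrelevant features. This is a minimal NumPy illustration under our own assumptions, not the paper's actual method; all names (`HiddenDistribution`, `make_version_dataset`, `blend`) and parameter values are hypothetical.

```python
# Minimal sketch (assumed, not the authors' exact method): create model
# "versions" by blending a version-specific hidden distribution into the
# training data, so each version learns distinct task-irrelevant features.
import numpy as np

class HiddenDistribution:
    """A parameterized Gaussian pattern that uniquely tags one model version."""
    def __init__(self, shape, seed, mean=0.0, std=0.1):
        rng = np.random.default_rng(seed)            # the seed defines this version
        self.pattern = rng.normal(mean, std, size=shape)

    def sample(self, batch_size, jitter=0.02):
        # Per-example samples: small jitter around the version's fixed pattern.
        noise = np.random.normal(0.0, jitter, size=(batch_size, *self.pattern.shape))
        return self.pattern[None, ...] + noise

def make_version_dataset(x_train, version_seed, blend=0.15):
    """Blend hidden-distribution samples into clean inputs (assumed in [0, 1])."""
    hidden = HiddenDistribution(shape=x_train.shape[1:], seed=version_seed)
    h = hidden.sample(batch_size=x_train.shape[0])
    x_versioned = np.clip((1.0 - blend) * x_train + blend * h, 0.0, 1.0)
    return x_versioned  # labels stay unchanged; only task-irrelevant features differ

# Usage idea: train version k on make_version_dataset(x_train, version_seed=k);
# if version k leaks, train and deploy version k+1 with a fresh hidden distribution.
```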
Submission Number: 151