Keywords: learning theory, watermarking, adversarial defense
Abstract: As AI becomes omnipresent, it is crucial to study the safety aspects of learning, such as guaranteed watermarking capabilities and defenses against adversarial attacks.
In prior work, these properties were generally studied separately and empirically, barring a few exceptions.
Meanwhile, strong forms of adversarial attacks that are *transferable* have been developed empirically, both for discriminative DNNs (Liu et al., 2016) and for LLMs (Zou et al., 2023).
In this ever-evolving landscape of attacks and defenses, we initiate the formal study of watermarks, defenses, and transferable attacks for classification under a *unified framework*, in which two time-bounded players participate in an interactive protocol.
Within this framework, we show that for every learning task, at least one of the three schemes exists.
Importantly, our results cover regimes where VC theory is not necessarily applicable.
Finally, we provide provable examples of the three schemes and show that transferable attacks exist only in regimes beyond bounded VC dimension. The example we give is a nontrivial construction based on cryptographic tools, namely homomorphic encryption.
Submission Number: 25