- Keywords: decentralized attribution, generative models
- TL;DR: This paper investigates the feasibility of decentralized attribution of generative models, addressing growing concerns about content fabricated by such models.
- Abstract: There have been growing concerns regarding the fabrication of content through generative models. This paper investigates the feasibility of decentralized attribution of such models. Given a group of models derived from the same dataset and published by different users, attributability is achieved when a public verification service associated with each model (a linear classifier) returns positive only for outputs of that model. Attribution allows machine-generated content to be traced back to its source model, thus facilitating IP protection and content regulation. Decentralized attribution prevents forgery of source models by allowing each user access only to their own classifier, which is parameterized by a key distributed by a registry. Our major contribution is the development of design rules for the keys, which are derived from first-order sufficient conditions for decentralized attribution. Through validation on MNIST, CelebA, and Cityscapes, we show that keys need to (1) be orthogonal or opposite to each other and (2) belong to a subspace dependent on the data distribution and the architecture of the generative model. We also empirically examine the trade-off between generation quality and robust attributability against adversarial post-processing of model outputs.
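The verification scheme described above can be illustrated with a minimal toy sketch. All dimensions, values, and the way the "model outputs" are simulated are hypothetical and chosen only to show design rule (1), mutually orthogonal keys, and the key-parameterized linear classifier; this is not the paper's actual training procedure.

```python
import numpy as np

# Hypothetical setup: 4 users, outputs flattened to 16 dimensions.
rng = np.random.default_rng(0)
dim, n_users = 16, 4

# Design rule (1): mutually orthogonal unit-norm keys, obtained here
# via QR decomposition of a random matrix (one possible construction).
q, _ = np.linalg.qr(rng.standard_normal((dim, n_users)))
keys = q.T  # row i is the key parameterizing user i's classifier

def verify(key, x):
    """Public verification service: a linear classifier that returns
    positive only when x correlates positively with the given key."""
    return float(key @ x) > 0.0

# Simulated generator outputs: each user's model embeds a positive
# component along its own key and a small negative component along
# the others, so only that user's classifier fires.
outputs = [1.1 * keys[i] - 0.1 * keys.sum(axis=0) for i in range(n_users)]

# Attributability: classifier i returns positive only for model i.
for i in range(n_users):
    for j in range(n_users):
        assert verify(keys[i], outputs[j]) == (i == j)
```

In this sketch, orthogonality ensures that the component a model embeds for its own key contributes nothing to the linear score of any other user's classifier, which is what prevents cross-attribution.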