Archiving Submission: No (non-archival)
Previous Venue If Non Archival: Under review
Keywords: image tokenizers, audio tokenizers, consistency, watermarking, multimodal
TL;DR: We study token-level watermarking of autoregressive image generation models, identifying and resolving key roadblocks that stem from tokenization.
Abstract: Watermarking the outputs of generative models has emerged as a promising approach for tracking their provenance. Despite significant interest in autoregressive image generation models and their potential for misuse, no prior work has attempted to watermark their outputs at the token level. In this work, we present the first such approach by adapting language model watermarking techniques to this setting. We identify a key challenge: the lack of reverse cycle-consistency (RCC), wherein re-tokenizing generated image tokens significantly alters the token sequence, effectively erasing the watermark. To address this, and to make our method robust to common image transformations and removal attacks, we introduce a custom tokenizer-detokenizer finetuning procedure that improves RCC, together with a watermark synchronization step. As our experiments demonstrate, our approach enables robust watermark detection with theoretically grounded p-values.
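To make the detection claim concrete, here is a minimal sketch of one standard language-model watermarking recipe that yields theoretically grounded p-values when carried over to image token sequences: a context-seeded "green list" test with an exact binomial tail. This is an illustrative assumption about how such a detector could look, not the paper's actual implementation; the names (detection_p_value, green_fraction, hash_key) are hypothetical.

```python
# Illustrative sketch only: a green-list watermark detector of the kind used for
# language models (e.g., Kirchenbauer-style schemes), applied to an image token
# sequence. All names are hypothetical and do not correspond to the paper's code.
import hashlib
import math
from typing import Sequence

def _is_green(prev_token: int, token: int, green_fraction: float, hash_key: int) -> bool:
    """Pseudo-randomly decide whether `token` is 'green' given the previous token."""
    digest = hashlib.sha256(f"{hash_key}:{prev_token}:{token}".encode()).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return u < green_fraction

def detection_p_value(tokens: Sequence[int], green_fraction: float = 0.5,
                      hash_key: int = 42) -> float:
    """Exact binomial tail p-value under the null 'no watermark was embedded'."""
    n = len(tokens) - 1
    k = sum(_is_green(tokens[i], tokens[i + 1], green_fraction, hash_key)
            for i in range(n))
    # Under the null, each token is green independently with prob. green_fraction,
    # so the green count is Binomial(n, green_fraction); report P(X >= k).
    return sum(math.comb(n, j) * green_fraction**j * (1 - green_fraction)**(n - j)
               for j in range(k, n + 1))
```

A detector of this kind only sees the watermark signal if re-tokenizing the generated image reproduces (most of) the original token sequence, which is exactly why the RCC finetuning and watermark synchronization steps described in the abstract matter.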
Submission Number: 11