Frequency-Domain Model Fingerprinting for Image Autoregressive Models

AAAI 2026 Workshop AIGOV, Submission 36

21 Oct 2025 (modified: 25 Nov 2025), CC BY 4.0
Keywords: image generation, fingerprinting, backdoor attacks, autoregressive models, infinity
TL;DR: We propose FreqIAR, the first framework to safeguard the model IP of Image Autoregressive Models.
Abstract: Image Autoregressive Models (IARs) have shown remarkable performance in generating high-quality images. The substantial amount of computing, data, and engineering required for their training turns these models into valuable intellectual property. While prior work has explored protecting large language models and diffusion models from theft or misuse, in this paper we propose FreqIAR, the first framework to safeguard the model intellectual property of IARs. Our approach embeds a fingerprint in the frequency domain during the image generation process via a backdoor mechanism; the fingerprint is invisible in image space but reliably detectable in the frequency spectrum of images generated from trigger inputs. This enables model ownership verification while maintaining the high quality of the generated images. Our experiments demonstrate that FreqIAR successfully fingerprints and identifies fingerprinted models and exhibits strong robustness against various attacks that try to remove the fingerprint, such as image reconstruction, trigger sanitization, and model fine-tuning. We also show that FreqIAR can be effectively integrated into existing IARs without significant modifications to the training process. Overall, our work contributes to a more trustworthy deployment of IARs.
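To make the core idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual method) of how a fingerprint can live in the frequency domain while staying invisible in image space: a secret pseudo-random pattern is added to mid-frequency FFT magnitudes, and verification correlates the spectrum of a suspect image against that pattern. All function names, the ring-mask choice, and the correlation threshold are illustrative assumptions.

```python
import numpy as np

def ring_mask(size, lo=0.2, hi=0.3):
    # Boolean mask selecting a mid-frequency ring in the 2-D FFT plane
    # (assumption: mid frequencies balance invisibility and robustness).
    f = np.fft.fftfreq(size)
    fy, fx = np.meshgrid(f, f, indexing="ij")
    r = np.hypot(fy, fx)
    return (r > lo) & (r < hi)

def embed_fingerprint(img, mask, strength=500.0, seed=0):
    # Add a secret pseudo-random pattern to the FFT magnitudes at `mask`,
    # keeping phases intact, then return the (real-valued) marked image.
    rng = np.random.default_rng(seed)          # seed plays the role of a key
    pattern = rng.random(int(mask.sum()))
    spec = np.fft.fft2(img)
    mag, phase = np.abs(spec), np.angle(spec)
    mag[mask] += strength * pattern
    return np.real(np.fft.ifft2(mag * np.exp(1j * phase))), pattern

def detect_fingerprint(img, mask, pattern):
    # Pearson correlation between the image's masked FFT magnitudes and
    # the expected pattern: close to 1 for marked images, near 0 otherwise.
    mag = np.abs(np.fft.fft2(img))[mask]
    a = mag - mag.mean()
    b = pattern - pattern.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

In FreqIAR the fingerprint is produced by the model itself when queried with trigger inputs, rather than being stamped onto images post hoc as above; the sketch only illustrates why a spectral pattern can be imperceptible yet statistically verifiable.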
Submission Number: 36