Track: tiny / short paper (3-5 pages)
Keywords: Representation-Learning, Self-Supervised Learning, Anomaly Detection
Abstract: AI-generated photorealistic faces, from GANs to diffusion models, have become indistinguishable from authentic images, and their proliferation poses significant privacy and security risks, enabling misinformation and identity fraud at scale on social media and other platforms. To detect these AI-generated faces effectively, we propose a fundamentally new approach inspired by the intrinsic stylistic discrepancies between authentic and synthetic images. Our key insight is that even highly realistic AI-generated faces exhibit persistent differences in style representations, which manifest as distinguishable patterns in the W+ style space. We introduce a self-supervised style representation learning approach that captures intrinsic differences between authentic and synthetic faces. By first learning the style distribution of authentic images, our method identifies deviations indicative of AI generation without relying on explicit generative watermarks. This enables strong generalization across unseen generators, including diffusion-based models. Experiments show high detection accuracy (93\%+) across multiple generative datasets and significant improvements in cross-domain settings.
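The abstract's core idea, learning the style distribution of authentic faces and flagging deviations as likely AI-generated, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a hypothetical pretrained encoder `encode_to_wplus` that maps a face image to a flattened W+ style vector (e.g., a GAN-inversion encoder), and it substitutes a simple Gaussian fit with Mahalanobis scoring for the paper's self-supervised representation learning.

```python
# Minimal sketch (not the authors' method): anomaly scoring on W+ style embeddings.
# Assumption: `encode_to_wplus(image)` returns a flattened W+ vector; this helper
# is hypothetical and stands in for whatever style encoder the paper uses.
import numpy as np


def fit_real_style_distribution(real_wplus: np.ndarray):
    """Fit a Gaussian to W+ embeddings of authentic faces.

    real_wplus: array of shape (num_images, dim), one flattened W+ vector per image.
    Returns the mean vector and inverse covariance for Mahalanobis scoring.
    """
    mean = real_wplus.mean(axis=0)
    cov = np.cov(real_wplus, rowvar=False)
    # Regularize the covariance so the inverse is well conditioned.
    cov += 1e-6 * np.eye(cov.shape[0])
    return mean, np.linalg.inv(cov)


def anomaly_score(wplus: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray) -> float:
    """Mahalanobis distance of one W+ vector from the real-face distribution.

    Larger scores indicate style deviations suggestive of AI generation.
    """
    diff = wplus - mean
    return float(np.sqrt(diff @ cov_inv @ diff))


# Usage (hypothetical encoder): score = anomaly_score(encode_to_wplus(image), mean, cov_inv)
# Images scoring above a threshold chosen on validation data are flagged as AI-generated.
```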
Presenter: ~Tharun_Anand1
Format: No, the presenting author is unable to, or unlikely to be able to, attend in person.
Funding: No, the presenting author of this submission does *not* fall under ICLR’s funding aims, or has sufficient alternate funding.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 14