Track: tiny / short paper (3-5 pages)
Keywords: watermarking, metadata, generative AI, robustness, research, regulation, regulators, transparency, provenance, labeling, disclosure, materiality
TL;DR: This paper examines our organization's approach to marking AI content and providing user transparency, highlighting lessons learned and the challenges ahead.
Abstract: The proliferation of generative-AI ("GenAI") technology promises to revolutionize content creation across online platforms. This advancement has sparked significant public debate concerning transparency around AI-generated content. As the line between human-generated and synthetic content blurs, people increasingly want to know where the boundary lies. Invisible and visible watermarks, content labels, and IPTC and C2PA metadata are among the technical approaches used by Meta, and by the industry at large, to enable transparency for AI-created or AI-edited content online. This paper examines Meta's approach to marking AI content and providing user transparency, highlighting lessons learned, and the challenges ahead, in striving for effective AI transparency, including suggestions for the research areas most likely to advance industry solutions for indirect disclosure and user transparency for GenAI content. Key challenges have included the lack of robustness of metadata, the imperfect robustness of watermarks, the difficulty of defining "materiality" for AI edits, the question of how to provide users with appropriate transparency, and evolving understanding and expectations over time. We detail Meta's experience launching labels for first- and third-party content, both fully AI-generated and AI-edited, at a global scale using GenAI signals from IPTC, C2PA, and known invisible watermarks. We also describe the challenge of meeting user expectations related to the materiality of edits and the choice of labeling language, which resulted in changes to our approach. This paper focuses specifically on transparency related to user-generated content that is non-commercial in nature.
Presenter: ~Elizabeth_Hilbert1
Format: Yes, the presenting author will attend in person if this work is accepted to the workshop.
Funding: No, the presenting author of this submission does *not* fall under ICLR’s funding aims, or has sufficient alternate funding.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 52