Abstract: The rise of generative AI is transforming the landscape of digital imagery and exerting a significant influence on online creative communities. This has led to the emergence of AI-Generated Content (AIGC) social platforms such as Civitai. These distinctive platforms allow users to build and share their own generative AI models, thereby enhancing the potential for more diverse artistic expression. Designed in the vein of social networks, they also provide artists with the means to showcase their creations (generated from the models), engage in discussions, and obtain feedback, thus nurturing a sense of community. Yet this openness also raises concerns about the abuse of such platforms, e.g., using models to disseminate deceptive deepfakes or infringe upon copyrights. To explore this, we conduct the first comprehensive empirical study of an AIGC social platform, focusing on its use for generating abusive content. As an exemplar, we construct a large-scale dataset covering Civitai, the largest available AIGC social platform. Based on this dataset of 87K models and 2M images, we characterize the content shared on the platform and discuss moderation strategies to better govern these platforms.
Primary Subject Area: [Generation] Social Aspects of Generative AI
Secondary Subject Area: [Experience] Art and Culture, [Generation] Generative Multimedia
Relevance To Conference: This paper is highly relevant to the topic of "social aspects of generative AI," as it presents a thorough empirical analysis of Civitai, an online social platform for generative AI models and AI-generated content (AIGC). Our findings provide valuable insight into the characteristics of abusive content on AIGC social platforms like Civitai, and offer guidance for moderators and moderation systems to better regulate these platforms. This will help such platforms promote fairness, transparency, and a more positive user experience in the era of generative AI.
Supplementary Material: zip
Submission Number: 2325