Research Area: Societal implications, LMs and interactions
Keywords: Human-AI Interactions; Communication
TL;DR: Model Autophagy Analysis of self-consumption loops in human-AI information exchange
Abstract: The increasing significance of large models and their multi-modal variants
in societal information processing has ignited debates on social safety and
ethics. However, comprehensive analysis is lacking for: (i) the interactions
between human and artificial intelligence systems, and (ii) understanding and
addressing the associated limitations. To bridge this gap, we present Model
Autophagy Analysis to explain the self-consumption of large models. We employ
two distinct autophagous loops (referred to as "self-consumption loops") to
elucidate the suppression of human-generated information in exchanges between
humans and AI systems. Through comprehensive experiments on diverse datasets,
we evaluate the capacities of generative models as both creators and
disseminators of information. Our key findings reveal: (i) a progressive
prevalence of model-generated synthetic information over human-generated
information within training datasets over time; (ii) a discernible tendency of
large models, when acting as information transmitters across multiple
iterations, to selectively modify or prioritize specific content; and (iii) a
potential reduction in the diversity of socially or human-generated
information, leading to bottlenecks in the performance enhancement of large
models and confining them to local optima.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 454