Cohesive Conversations: Enhancing Authenticity in Multi-Agent Simulated Dialogues

Published: 10 Jul 2024 · Last Modified: 26 Aug 2024 · COLM · CC BY 4.0
Research Area: LMs and interactions
Keywords: multi-agent dialogues, multi-agent communication, multi-session dialogues, machine-generated dialogues, inconsistencies and repetition in dialogues, discourse-level error detection, longitudinal dialogue analysis, generative agents, LLM agents
TL;DR: We investigate problems that arise in multi-agent simulated dialogues over extended periods and propose a Screening, Diagnosis, and Regeneration framework that corrects inconsistencies and hallucinations as they arise while improving the diversity of dialogues across sessions.
Abstract: This paper investigates the quality of multi-agent dialogues in simulations powered by Large Language Models (LLMs). Our analysis of dialogues and memory over multiple sessions reveals significant issues such as repetition, inconsistency, and hallucination, which are exacerbated by the propagation of erroneous information. To combat these challenges, we propose a novel Screening, Diagnosis, and Regeneration (SDR) framework that detects and corrects utterance errors through a three-stage process: immediate issue identification, evidence gathering from past dialogues, and LLM-based analysis for utterance revision. By incorporating our SDR framework into Generative Agents (Park et al., 2023), we enhance the diversity, consistency, and factuality of the generated dialogues. This work presents a pioneering approach to enhancing dialogue quality in multi-agent simulations, establishing a new standard for future research in the field.
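The abstract describes SDR as a screen, diagnose, and regenerate loop applied to each utterance. The following is a minimal illustrative sketch of such a loop, not the authors' implementation; the `llm` callable and all helper names are assumptions introduced here for illustration.

```python
def sdr_revise(utterance, dialogue_history, llm):
    """Screen an utterance, diagnose it against past dialogues, and regenerate it.

    `llm` is assumed to be a callable that takes a prompt string and returns text.
    """
    # Screening: flag potential repetition, inconsistency, or hallucination.
    issues = llm(
        "List any repetition, inconsistency, or hallucination issues in this "
        f"utterance (empty if none): {utterance}"
    )
    if not issues.strip():
        return utterance  # nothing to fix

    # Diagnosis: gather candidate evidence from earlier turns (naive word overlap here).
    evidence = [
        turn for turn in dialogue_history
        if any(word in turn for word in utterance.split())
    ]

    # Regeneration: ask the LLM to revise the utterance given the issues and evidence.
    prompt = (
        "Revise the utterance so it is consistent with the evidence and not repetitive.\n"
        f"Issues: {issues}\nEvidence: {evidence}\nUtterance: {utterance}"
    )
    return llm(prompt)
```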
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 416