Keywords: Natural Language Processing, Multi-Agent, Large Language Model, Consensus Seeking
TL;DR: We propose a Belief-Calibrated Consensus Seeking (BCCS) framework that facilitates stable consensus in multi-agent systems by selecting optimal collaborators and calibrating the consensus judgment with system-internal beliefs.
Abstract: A multi-agent system (MAS) enhances its capacity to solve complex natural language processing (NLP) tasks through collaboration among multiple agents, where consensus-seeking serves as a fundamental mechanism.
However, existing consensus-seeking approaches typically rely on voting mechanisms to judge consensus, overlooking contradictions in system-internal beliefs that destabilize the consensus.
Moreover, these methods often have each agent update its results through indiscriminate collaboration with every other agent.
Such uniform interaction fails to identify the optimal collaborators for each agent, hindering the emergence of a stable consensus.
To address these challenges, we provide a theoretical framework for selecting optimal collaborators that maximize consensus stability.
Building on these theorems, we propose the Belief-Calibrated Consensus Seeking (BCCS) framework, which facilitates stable consensus by selecting optimal collaborators and calibrating the consensus judgment with system-internal beliefs.
Experimental results on the MATH and MMLU benchmark datasets demonstrate that the proposed BCCS framework outperforms the best existing results by 2.23% and 3.95% in accuracy on challenging tasks, respectively.
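To make the two mechanisms in the abstract concrete, here is a minimal sketch, not the paper's actual algorithm: `calibrated_consensus` judges consensus by belief-weighted agreement rather than a plain vote count, and `select_collaborator` picks each agent's peer with the strongest belief as a stand-in for the paper's stability-maximizing selection. All function names, the threshold value, and the scoring rule are illustrative assumptions.

```python
from collections import Counter

def calibrated_consensus(answers, beliefs, threshold=0.7):
    """Declare consensus only if the leading answer carries enough of the
    system's total belief mass (a hedged stand-in for belief calibration).

    answers: list of agent answers (hashable), beliefs: list of confidences.
    Returns (leading_answer, consensus_reached).
    """
    weight = Counter()
    for ans, belief in zip(answers, beliefs):
        weight[ans] += belief  # accumulate belief mass per candidate answer
    best, score = weight.most_common(1)[0]
    return best, score / sum(beliefs) >= threshold

def select_collaborator(idx, beliefs):
    """For agent idx, pick the peer with the highest belief -- a simple
    proxy for selecting the collaborator that most stabilizes consensus."""
    peers = [j in range(len(beliefs)) for j in []] or \
            [j for j in range(len(beliefs)) if j != idx]
    return max(peers, key=lambda j: beliefs[j])

# Example: two confident agents agree on "4"; a low-belief agent says "5".
answers = ["4", "4", "5"]
beliefs = [0.9, 0.8, 0.2]
print(calibrated_consensus(answers, beliefs))  # belief-weighted judgment
print(select_collaborator(2, beliefs))         # weakest agent pairs with agent 0
```

Under a plain majority vote the low-belief dissenter would be outvoted identically whether the majority is confident or not; the belief-weighted judgment instead withholds consensus when the system's internal beliefs contradict the vote.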
Our code and data are available at https://github.com/dengwentao99/BCCS.
Supplementary Material: zip
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 16436