Abstract: Large language models (LLMs) still struggle with complex reasoning in multi-agent debate (MAD) systems, where fully connected structures incur high computational costs. Existing methods reduce computation with static sparse topologies, but they neglect semantic relationships and the dynamic evolution of opinions. To address this challenge, we propose ASMAD, an adaptive sparse topology framework that combines sociophysical opinion dynamics with LLMs through two innovations: (1) probabilistic, semantic-guided attention gates that dynamically control opinion visibility; and (2) a hybrid paradigm that couples adaptive trust-boundary regulation with opinion synchronization. Experiments on the GSM8K and MMLU benchmarks show that ASMAD reduces token costs to one third while maintaining competitive accuracy with 4-bit quantized 7B-9B models.
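The two mechanisms named in the abstract (semantic-guided gating and adaptive trust-boundary regulation) can be read as a bounded-confidence update over agent opinion embeddings. The Python sketch below illustrates that reading only; the class and method names (GatedDebateAgent, should_attend, trust_radius), the sigmoid gate, and the boundary-update rules are illustrative assumptions, not ASMAD's actual implementation.

    import numpy as np

    def cosine(u, v):
        # Cosine similarity between two opinion embeddings.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

    class GatedDebateAgent:
        """Toy debate agent whose visibility of peers is gated by semantic
        similarity against an adaptive trust boundary (bounded-confidence style)."""

        def __init__(self, opinion_embedding, trust_radius=0.5, adapt_rate=0.1, rng=None):
            self.opinion = np.asarray(opinion_embedding, dtype=float)
            self.trust_radius = trust_radius   # minimum similarity needed to attend to a peer
            self.adapt_rate = adapt_rate       # how fast the opinion moves toward visible peers
            self.rng = rng or np.random.default_rng(0)

        def should_attend(self, peer_opinion):
            # Probabilistic gate: attend with a probability that grows with how far
            # the peer's similarity exceeds the current trust boundary.
            sim = cosine(self.opinion, peer_opinion)
            p_attend = 1.0 / (1.0 + np.exp(-8.0 * (sim - self.trust_radius)))
            return self.rng.random() < p_attend

        def update(self, peer_opinions):
            # One debate round: average the visible peers into this agent's opinion,
            # then tighten or relax the trust boundary based on how many were visible.
            visible = [p for p in peer_opinions if self.should_attend(p)]
            if visible:
                target = np.mean(visible, axis=0)
                self.opinion += self.adapt_rate * (target - self.opinion)
                # Many visible peers: tighten the boundary (sparser attention next round).
                self.trust_radius = min(0.95, self.trust_radius + 0.02 * len(visible))
            else:
                # Isolated agent: relax the boundary so the debate does not stall.
                self.trust_radius = max(0.0, self.trust_radius - 0.05)

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        agents = [GatedDebateAgent(rng.normal(size=8), rng=rng) for _ in range(4)]
        for round_idx in range(3):
            opinions = [a.opinion.copy() for a in agents]
            for i, agent in enumerate(agents):
                agent.update([o for j, o in enumerate(opinions) if j != i])
            print(f"round {round_idx}: trust radii =",
                  [round(a.trust_radius, 2) for a in agents])

In this toy dynamic, agents only attend to semantically close peers, and the per-agent trust radius adapts each round, which is the general shape of an adaptive sparse debate topology; the real system would gate LLM message passing rather than averaging embedding vectors.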
Paper Type: Short
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: task-oriented, factuality, evaluation and metrics, commonsense reasoning
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 7566