Keywords: Multi-agent System, Large Language Model, Communication Efficiency
Abstract: Multi-agent systems built on large language models (LLMs) have demonstrated impressive capabilities across various domains. However, current agent communication suffers from verbose output that overloads the context and increases computational cost. Although existing approaches compress messages on the speaker side, they struggle to adapt to different listeners and to identify which information is relevant. An effective strategy in human communication is to let the listener interrupt the speaker to express an opinion or ask for clarification. Motivated by this, we propose an interruptible communication framework that allows the listening agent to interrupt the current speaker. Through prompting experiments, we find that current LLMs are often overconfident and interrupt before receiving enough information. We therefore propose a learning method that predicts appropriate interruption points based on the estimated future reward and cost. We evaluate our framework across various multi-agent scenarios, including a 2-agent text-based Pictionary game, 3-agent meeting scheduling, and 3-agent debate. Experimental results show that our framework, HANDRAISER, reduces communication cost by 32.2% compared with the baseline while achieving comparable or superior task performance. The learned interruption behavior also generalizes to different agents and tasks.
Primary Area: generative models
Submission Number: 21328