Abstract: The rapid advancement of natural language processing (NLP) has been propelled by large language models (LLMs), yet their monolithic nature often leads to inefficiencies, particularly when specializing them for complex tasks through training. To address this, we introduce \textit{Pool of Experts}, a novel multi-agent LLM framework that achieves role specialization through prompt-based agentification, circumventing the computational cost of fine-tuning.
The methodology follows a structured two-stage process: initialization, in which agents are configured with distinct expert roles based on the task context, and inference, in which the agents collaboratively generate responses. We evaluate the impact of expert role selection on task accuracy across multiple datasets, employing decision-making strategies such as Majority Voting and a Final Decision Maker. Our system outperforms state-of-the-art systems on complex tasks such as StrategyQA and Last Letter Concatenation. While no single framework consistently excels, the choice of framework significantly influences performance on individual tasks. This research paves the way for more sophisticated and structured NLP systems, contributing to the advancement of multi-agent LLMs.
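The abstract only names the two stages and the decision strategies; the Python sketch below is a hypothetical illustration of that pipeline, not the authors' implementation. The `call_llm` stub, the role prompts, and all helper names are assumptions standing in for whatever LLM API and prompt templates the paper actually uses.

```python
from collections import Counter

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    # A real implementation would send `prompt` to a language model and
    # return its completion; a fixed answer keeps the sketch runnable.
    return "yes"

# Stage 1: initialization -- configure each agent as a distinct expert
# via a role-specific prompt (prompt-based agentification; no fine-tuning).
def make_expert(role: str):
    def agent(question: str) -> str:
        prompt = (
            f"You are a {role}. Answer the question concisely.\n"
            f"Question: {question}\nAnswer:"
        )
        return call_llm(prompt).strip()
    return agent

# Stage 2: inference -- every expert answers, then a decision strategy
# aggregates the pool's responses.
def majority_voting(answers: list[str]) -> str:
    return Counter(answers).most_common(1)[0][0]

def final_decision_maker(question: str, answers: list[str]) -> str:
    # A dedicated agent reviews the experts' answers and issues a verdict.
    prompt = (
        f"Question: {question}\n"
        f"Expert answers: {answers}\n"
        "As the final decision maker, choose the single best answer:"
    )
    return call_llm(prompt).strip()

if __name__ == "__main__":
    roles = ["logician", "commonsense reasoner", "linguist"]
    experts = [make_expert(r) for r in roles]
    question = "Would a pear sink in water?"  # StrategyQA-style query
    answers = [expert(question) for expert in experts]
    print("Majority vote:", majority_voting(answers))
    print("Final decision:", final_decision_maker(question, answers))
```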
Paper Type: Long
Research Area: Generation
Research Area Keywords: zero-shot generation, prompting, meta learning, commonsense QA
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 4159