More Agents Is All You Need

TMLR Paper2706 Authors

17 May 2024 (modified: 25 May 2024) · Under review for TMLR · CC BY-SA 4.0
Abstract: We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated. This method is orthogonal to existing, more sophisticated methods for enhancing LLMs, and the degree of enhancement correlates with task difficulty. We conduct comprehensive experiments on a wide range of LLM benchmarks to verify this finding and to study the properties that facilitate its occurrence. Our code will be provided upon acceptance.
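The sampling-and-voting method described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' released code: `sample_fn` is a hypothetical stand-in for a single LLM agent's (stochastic) answer to a query, and ties in the vote are broken by first occurrence.

```python
from collections import Counter
from typing import Callable, List


def majority_vote(answers: List[str]) -> str:
    """Return the most frequent answer among the agents' sampled outputs."""
    counts = Counter(answers)
    # most_common(1) yields [(answer, count)]; ties break by insertion order
    return counts.most_common(1)[0][0]


def sample_and_vote(query: str, sample_fn: Callable[[str], str], num_agents: int) -> str:
    """Query `num_agents` independent agents and aggregate by majority vote.

    `sample_fn` is a placeholder for one LLM call; in practice each call
    would sample a fresh completion from the model.
    """
    answers = [sample_fn(query) for _ in range(num_agents)]
    return majority_vote(answers)
```

The key property studied in the paper is that accuracy of the voted answer tends to improve as `num_agents` grows, without any change to the underlying model or prompts.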
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=6pNSLmLDHa
Changes Since Last Submission: We have removed the GitHub link from the abstract to preserve the anonymity of the authors.
Assigned Action Editor: ~Karthik_R_Narasimhan1
Submission Number: 2706