Token-by-Token Election: Improving Language Model Reasoning through Token-Level Multi-model Collaboration

ICLR 2025 Conference Submission 221 Authors

13 Sept 2024 (modified: 13 Oct 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: LLM, multi-model collaboration
Abstract: With the continuous development of large language models (LLMs), they have demonstrated remarkable capabilities across many areas of natural language processing (NLP). However, due to their inherent limitations, the performance of a single model on many complex reasoning tasks has reached a bottleneck. A feasible solution is to introduce external feedback to further improve model performance, among which multi-model collaboration is a particularly promising approach. In this paper, we propose token-by-token election (TTE), a novel token-level multi-model collaboration strategy. Unlike common multi-model collaboration methods that operate at the overall answer level, TTE performs multi-model elections at the lowest level, the token level. It selects the optimal token from the next-token distributions produced by multiple LLMs and then generates the answer autoregressively, allowing multiple LLMs to reach a consensus on each token. Inspired by human behavior, TTE comprises three election modes, Cooperation, Competition, and Counting, all of which aim to sample the optimal token from multiple distributions. By strictly controlling the generation quality of each token, TTE improves the quality of the overall answer and breaks through the performance bottleneck of a single LLM. Through extensive experiments on a variety of reasoning benchmarks, we demonstrate the strong performance of TTE, which further improves over the current state-of-the-art single LLMs and other multi-model collaborative methods. The code will be released on GitHub.
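To make the election idea concrete, below is a minimal sketch of token-level election over a shared vocabulary. The abstract does not specify the exact rules of the three modes, so the realizations here (averaging for Cooperation, a most-confident-model rule for Competition, and majority voting for Counting) are illustrative assumptions rather than the paper's definitions, and the helper names (elect_next_token, generate) are hypothetical.

import numpy as np

def elect_next_token(distributions, mode="cooperation"):
    """Pick one token id from several next-token distributions.

    distributions: list of 1-D probability arrays over a shared vocabulary,
    one per participating model (assumed aligned tokenizers).
    """
    dists = np.asarray(distributions)  # shape: (n_models, vocab_size)
    if mode == "cooperation":
        # Average the distributions, then take the consensus argmax.
        return int(np.argmax(dists.mean(axis=0)))
    if mode == "competition":
        # Let the single most confident model decide this step.
        winner = int(np.argmax(dists.max(axis=1)))
        return int(np.argmax(dists[winner]))
    if mode == "counting":
        # Each model votes for its own argmax token; the majority wins,
        # with ties broken by total probability mass.
        votes = np.argmax(dists, axis=1)
        candidates, counts = np.unique(votes, return_counts=True)
        top = candidates[counts == counts.max()]
        return int(top[np.argmax(dists[:, top].sum(axis=0))])
    raise ValueError(f"unknown mode: {mode}")

def generate(models, prompt_ids, max_new_tokens=32, mode="cooperation", eos_id=None):
    """Autoregressive decoding where every step is decided by an election.

    models: callables mapping a token-id list to a next-token distribution.
    """
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        dists = [m(ids) for m in models]
        token = elect_next_token(dists, mode=mode)
        ids.append(token)
        if eos_id is not None and token == eos_id:
            break
    return ids

In this sketch, each elected token is appended to the shared context before the next election, so all models condition on the same jointly produced prefix, which is what allows them to reach a per-token consensus.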
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 221