Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
TL;DR: We introduce Corex, a suite of strategies designed to enhance the capabilities of LLMs in complex task-solving, with a pivotal focus on advancing multi-model collaboration.
Abstract: Large Language Models (LLMs) are evolving at an unprecedented pace and have exhibited considerable capability in natural language processing (NLP), drawing on broad world knowledge. Benefiting from ultra-large-scale training corpora, a single LLM can manage typical NLP tasks competently. However, its performance on complex tasks is still confined by the limitations of its internal representation. To push this boundary further, we introduce Corex, a suite of novel general-purpose strategies that transform LLMs into autonomous agents, pioneering multi-model collaboration for task-solving. Inspired by human behaviors, Corex comprises diverse collaboration paradigms, including Discuss, Review, and Retrieve modes, which collectively enhance the reasoning process. These paradigms foster task-agnostic approaches that enable LLMs to “think outside the box,” thereby overcoming common errors and producing better solutions. Through extensive experiments across four different types of reasoning tasks, we demonstrate that orchestrating multiple LLMs to work in concert yields better results than existing strong methods. Further analysis reveals the cost-effectiveness of our method, explores synergies between models of various scales, and shows improved annotation efficiency.
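To make the collaboration idea concrete, the following is a minimal, hypothetical sketch of a Discuss-style round between multiple models; it is not the paper's actual algorithm. Each agent stands in for an LLM call, and here they are simple stubs so the example runs without any API access. The function names (`discuss`, `agreeable`, `stubborn`) and the majority-vote aggregation are illustrative assumptions.

```python
# Hypothetical sketch of a multi-model "Discuss" round (not the paper's
# implementation). Each agent is a stand-in for an LLM call: it receives
# the question plus the peers' current answers and returns an answer.
from collections import Counter
from typing import Callable, List

Agent = Callable[[str, List[str]], str]

def discuss(agents: List[Agent], question: str, rounds: int = 2) -> str:
    """Each round, every agent sees the question and all current answers
    and may revise its own; the final answer is chosen by majority vote."""
    answers = [agent(question, []) for agent in agents]
    for _ in range(rounds):
        answers = [agent(question, answers) for agent in agents]
    # Majority vote over the final answers (ties broken by first seen).
    return Counter(answers).most_common(1)[0][0]

# Stub "LLMs": one adopts a peer's answer when available, one never budges.
def agreeable(question: str, peers: List[str]) -> str:
    return peers[0] if peers else "4"

def stubborn(question: str, peers: List[str]) -> str:
    return "5"

result = discuss([agreeable, agreeable, stubborn], "What is 2 + 2?")
# result is "4": two agents converge and outvote the dissenting one.
```

In a real setting, each agent would wrap a different LLM (or the same LLM with different prompts), and the aggregation step could instead be handled by a designated judge model, as in the Review mode.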
Paper Type: long
Research Area: NLP Applications
Contribution Types: NLP engineering experiment, Approaches low compute settings-efficiency
Languages Studied: English, Programming Languages