Synergistic Weak-Strong Collaboration by Aligning Preferences

Published: 10 Oct 2024, Last Modified: 19 Nov 2024 · AFM 2024 Poster · CC BY 4.0
Keywords: Weak-Strong Model Collaboration, Preference Tuning, Large Language Model
TL;DR: We propose a synergistic collaboration framework in which a smaller, specialized model and a larger, general-purpose model work together, using preference fine-tuning to enhance problem-solving on specialized tasks.
Abstract: Current Large Language Models (LLMs) demonstrate exceptional general reasoning and problem-solving abilities but often struggle with specialized tasks or domains requiring proprietary information, due to their generalized training and size constraints. Fine-tuning large models for every specific domain is impractical: black-box model parameters are often inaccessible, and the computational costs are high. We explore a solution to this challenge: can a collaborative framework between a specialized weak model and a general strong model effectively extend LLMs' capabilities to niche but critical tasks? We propose a dynamic interaction in which the weak model, tailored to specific domains, generates detailed initial drafts and background information, while the strong model refines and enhances these drafts using its advanced reasoning skills. To optimize this collaboration, we introduce a feedback loop that fine-tunes the weak model on the strong model's preferences, fostering an adaptive and synergistic relationship. We validate our framework through experiments on three datasets and find that the collaboration significantly outperforms each model alone by leveraging their complementary strengths. Moreover, fine-tuning the weak model on the strong model's preferences further improves overall performance. Our collaborative approach achieves an average F1 score improvement of 3.24% over the weak model alone and 12.17% over the strong model alone across all benchmarks.
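The abstract describes two mechanisms: a draft-then-refine inference pipeline (weak model drafts, strong model refines) and a feedback loop that turns the strong model's preferences into training pairs for the weak model. The Python sketch below is illustrative only, assuming the reading above: the function names (weak_draft, strong_refine, strong_prefers) are hypothetical placeholders for the two models' generation and judging calls, not the authors' API, and DPO is mentioned only as one common preference-tuning objective, not necessarily the one used in the paper.

```python
from typing import Callable, List, Tuple

def collaborate(question: str,
                weak_draft: Callable[[str], str],
                strong_refine: Callable[[str, str], str]) -> str:
    """One round of collaboration: the weak, domain-specialized model
    produces a detailed draft with background information; the strong,
    general-purpose model refines it with its broader reasoning."""
    draft = weak_draft(question)           # domain knowledge, initial draft
    return strong_refine(question, draft)  # refinement by the strong model

def collect_preference_pairs(questions: List[str],
                             weak_draft: Callable[[str], str],
                             strong_prefers: Callable[[str, str, str], bool]
                             ) -> List[Tuple[str, str, str]]:
    """Build (prompt, chosen, rejected) triples by having the strong model
    judge pairs of sampled weak-model drafts. These triples can then feed a
    preference fine-tuning objective (e.g., DPO) to update the weak model."""
    pairs = []
    for q in questions:
        a, b = weak_draft(q), weak_draft(q)  # two sampled drafts
        if strong_prefers(q, a, b):
            pairs.append((q, a, b))          # a chosen, b rejected
        else:
            pairs.append((q, b, a))          # b chosen, a rejected
    return pairs

# Toy usage with stand-in models (a real system would call LLM APIs):
answer = collaborate(
    "Summarize drug X's interaction profile.",
    weak_draft=lambda q: f"Draft notes on: {q}",
    strong_refine=lambda q, d: f"Refined answer built from '{d}'",
)
```

Under this reading, the feedback loop alternates the two functions: collect preference pairs from the current weak model, fine-tune it on those pairs, then resume collaboration with the updated draft model.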
Submission Number: 98