Fusion-$\mathcal{X}$: Advancing LLM Ability with Adaptive Heterogeneous Model Integration

24 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Model integration, Large language models, Knowledge Interference
TL;DR: We propose a dynamic LLM integration framework, consisting of an Adaptive Selection Network and a Dynamic Weighted Fusion mechanism, that prevents knowledge interference and enhances overall model performance.
Abstract: Training LLMs presents significant challenges, including data access, privacy concerns, the complexity of training schedules, and limited resources. Therefore, a more accessible approach involves integrating existing LLMs, each tailored for different tasks or trained on distinct datasets, into an advanced and robust model with enhanced capabilities. Popular methods such as ensembling and weight merging require substantial memory and struggle to adapt to changing data environments. Recent efforts have aimed to transfer only the collective knowledge of multiple LLMs to the target LLM. However, the resulting fused model often suffers from interference and performance degradation due to a lack of flexibility in the fusion process, including candidate selection and the training pipeline. To address these issues, we propose a dynamic fusion framework that adaptively selects LLMs for integration. Specifically, to diminish knowledge interference during LLM fusion, we introduce an adaptive selection network. It is a learnable mechanism that explicitly evaluates and selects the best-performing source LLMs based on their rewards, allowing us to fuse knowledge from a flexible number of model candidates. To improve the knowledge fusion process, we propose a dynamic weighted fusion strategy that considers the intrinsic characteristics of candidate LLMs during fusion. Additionally, we incorporate a feedback-driven loss function to prevent the selector from converging to a state where it consistently selects the same candidates. Our experiments demonstrate that our method consistently enhances model performance across multiple benchmarks, yielding an improvement of up to 2.2\%. Additionally, our approach achieves a notable reduction in knowledge interference, showing up to a 50\% decrease compared to existing work.
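The abstract describes three components: an adaptive selection network that scores source LLMs, a dynamic weighted fusion of the selected candidates, and a feedback-driven loss that keeps the selector from collapsing onto a fixed subset. The paper's actual implementation is not shown here; the PyTorch-style sketch below is only an illustrative reading of those components, and every module name, shape, and hyperparameter (`AdaptiveFusion`, `top_k`, the entropy regularizer) is an assumption rather than the authors' code.

```python
# Minimal sketch (not the authors' implementation): score candidate source-LLM
# logits with a learnable selector, keep the top-k, fuse them with dynamic
# weights, and add an entropy-based regularizer that discourages the selector
# from always choosing the same candidates. All names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveFusion(nn.Module):
    def __init__(self, hidden_dim: int, num_candidates: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Selection network: maps the target model's hidden state to one
        # score per candidate source LLM.
        self.selector = nn.Linear(hidden_dim, num_candidates)

    def forward(self, target_hidden: torch.Tensor, candidate_logits: torch.Tensor):
        """
        target_hidden:    (batch, hidden_dim)              target-LLM representation
        candidate_logits: (batch, num_candidates, vocab)   source-LLM logits
        Returns fused logits and an auxiliary diversity loss.
        """
        scores = self.selector(target_hidden)              # (batch, C)
        probs = F.softmax(scores, dim=-1)

        # Adaptive selection: keep only the top-k candidates per example.
        top_vals, top_idx = probs.topk(self.top_k, dim=-1) # (batch, k)
        weights = top_vals / top_vals.sum(dim=-1, keepdim=True)

        # Dynamic weighted fusion of the selected candidates' logits.
        idx = top_idx.unsqueeze(-1).expand(-1, -1, candidate_logits.size(-1))
        selected = candidate_logits.gather(1, idx)         # (batch, k, vocab)
        fused_logits = (weights.unsqueeze(-1) * selected).sum(dim=1)

        # Feedback-driven regularizer: penalize low entropy of the average
        # selection distribution so the selector does not converge to a
        # state where it consistently picks the same candidates.
        mean_probs = probs.mean(dim=0)
        diversity_loss = (mean_probs * torch.log(mean_probs + 1e-9)).sum()

        return fused_logits, diversity_loss


# Toy usage (batch=4, hidden=16, candidates=3, vocab=32):
fusion = AdaptiveFusion(hidden_dim=16, num_candidates=3, top_k=2)
fused, aux = fusion(torch.randn(4, 16), torch.randn(4, 3, 32))
print(fused.shape, aux.item())
```

In this reading, the fused logits would serve as the distillation target for the target LLM, with the diversity term added to the training loss; how the rewards and intrinsic candidate characteristics enter the scoring is specified in the paper, not here.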
Supplementary Material: pdf
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3388