GenPoE: Generative Passage-level Mixture of Experts for Knowledge Enhancement of LLMs

ACL ARR 2025 May Submission 7867 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Parametric adaptation methods such as domain-adaptive pretraining (DAP), as well as retrieval-augmented generation (RAG), have been considered effective approaches for adapting large language models (LLMs) to new knowledge or domains. To unify the positive effects of parametric adaptation and RAG, this paper proposes **GenPoE**, a "generative" passage-level mixture of experts (MoE) for enhancing the knowledge of LLMs. Its key component is a novel *MoE-generating hypernetwork* that takes in-context retrieved passages and generates their "expert" parameters; these generated parameters are then integrated into the LLM as expert networks. Because the expert parameters are "generated", GenPoE does not require a separate, often costly, training or fine-tuning stage. By parameterizing passages into expert networks, GenPoE is likely to remain robust even when the retrieved passages are irrelevant. Experimental results on two open-domain question answering (QA) tasks show that GenPoE outperforms other passage-level knowledge editing methods, and that combining it with RAG yields superior performance over RAG alone. Our data and code will be available at \url{https://github.com/XXX/XXX}.
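
To make the described architecture concrete, below is a minimal, hypothetical sketch of the idea the abstract outlines: a hypernetwork maps a retrieved-passage encoding to low-rank "expert" parameters, which are mixed into a frozen feed-forward layer at inference time. This is not the authors' implementation; the class names (`PassageExpertHypernetwork`, `GeneratedExpertFFN`), the low-rank parameterization, and the scalar gate are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the paper's code): a hypernetwork generates low-rank
# expert weights from a pooled passage embedding, and the expert is applied as a
# gated residual on top of a frozen base FFN.

class PassageExpertHypernetwork(nn.Module):
    """Maps a passage embedding to low-rank expert weights (A, B)."""

    def __init__(self, passage_dim: int, hidden_dim: int, rank: int = 8):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.rank = rank
        # One linear head per generated weight matrix.
        self.to_A = nn.Linear(passage_dim, hidden_dim * rank)
        self.to_B = nn.Linear(passage_dim, rank * hidden_dim)

    def forward(self, passage_emb: torch.Tensor):
        # passage_emb: (batch, passage_dim) pooled encoding of a retrieved passage.
        A = self.to_A(passage_emb).view(-1, self.hidden_dim, self.rank)
        B = self.to_B(passage_emb).view(-1, self.rank, self.hidden_dim)
        return A, B


class GeneratedExpertFFN(nn.Module):
    """Frozen base FFN plus a passage-conditioned expert applied as a gated residual."""

    def __init__(self, base_ffn: nn.Module, gate_dim: int):
        super().__init__()
        self.base_ffn = base_ffn
        # Scalar gate deciding how much to trust the generated expert per token,
        # which is one plausible way to stay robust to irrelevant passages.
        self.gate = nn.Linear(gate_dim, 1)

    def forward(self, hidden: torch.Tensor, A: torch.Tensor, B: torch.Tensor):
        # hidden: (batch, seq, hidden_dim); A: (batch, hidden_dim, rank); B: (batch, rank, hidden_dim)
        expert_out = torch.bmm(torch.relu(torch.bmm(hidden, A)), B)
        g = torch.sigmoid(self.gate(hidden))  # (batch, seq, 1)
        return self.base_ffn(hidden) + g * expert_out


if __name__ == "__main__":
    batch, seq, hidden_dim, passage_dim = 2, 16, 64, 128
    hyper = PassageExpertHypernetwork(passage_dim, hidden_dim)
    layer = GeneratedExpertFFN(nn.Linear(hidden_dim, hidden_dim), gate_dim=hidden_dim)

    passage_emb = torch.randn(batch, passage_dim)  # pooled retrieved-passage encoding
    hidden = torch.randn(batch, seq, hidden_dim)   # transformer hidden states

    A, B = hyper(passage_emb)      # "generated" expert parameters, no fine-tuning step
    out = layer(hidden, A, B)      # expert mixed into the frozen layer
    print(out.shape)               # torch.Size([2, 16, 64])
```

In this sketch only the hypernetwork and gate would be trained once; at inference, new passages yield new expert parameters without any additional fine-tuning, which mirrors the training-free adaptation property the abstract claims.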
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: Question Answering, Efficient/Low-Resource Methods for NLP, Information Extraction
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 7867