MoPE: A Massive Mixture of Passage-Level Experts for Knowledge Editing in Open-Domain Question Answering

ACL ARR 2025 February Submission8494 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: As world knowledge continues to evolve, adapting LLMs to new knowledge is crucial. However, it poses significant challenges: naively fine-tuning the entire model often leads to catastrophic forgetting and high computational costs. While retrieval-augmented generation (RAG) and model editing have been increasingly studied for knowledge adaptation, this paper moves beyond the `RAG vs. fine-tuning' discussion to explore the `\textit{RAG vs. model editing}' question, and proposes a ``Massive Mixture of Experts (MMoE)'' approach for model editing, referred to as \textbf{MoPE}, i.e., a Massive Mixture of Passage-Level Experts, which consists of two key components at the training and inference stages: (1) \textbf{Massive passage-level editing with MMoE}, where a large set of passage-level experts is created using automatically generated question-answer pairs for each passage, and (2) \textbf{Retrieval-based routing with MMoE}, which employs dense retrieval to select the top-$k$ passage-level experts without requiring additional training. Experimental results demonstrate that MoPE outperforms a naively designed variant of RAG, i.e., direct RAG, and, when combined with direct RAG, surpasses an advanced variant of RAG while significantly improving over LoRA-based parameter-efficient tuning methods. Our data and code will be available at \url{https://github.com/XXX/XXX}.
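To make the retrieval-based routing component concrete, the following is a minimal sketch, assuming dense passage embeddings stored as NumPy vectors and experts keyed by passage ID; the function name `route_to_experts`, the inner-product scoring, and the toy data are illustrative assumptions, not the paper's actual retriever or expert format.

```python
# Hypothetical sketch of retrieval-based routing over passage-level experts.
# Assumption: each passage has one expert (e.g., a LoRA adapter), and routing
# selects the top-k experts by dense-retrieval similarity to the query.
import numpy as np

def route_to_experts(query_emb: np.ndarray,
                     passage_embs: np.ndarray,
                     expert_ids: list[str],
                     k: int = 3) -> list[str]:
    """Return the IDs of the top-k passage-level experts for a query."""
    scores = passage_embs @ query_emb      # inner-product similarity per passage
    top_k = np.argsort(-scores)[:k]        # indices of the k highest-scoring passages
    return [expert_ids[i] for i in top_k]

# Toy usage: 4 passages with 8-dim embeddings; route a query to 2 experts.
rng = np.random.default_rng(0)
passage_embs = rng.normal(size=(4, 8))
query_emb = passage_embs[2] + 0.1 * rng.normal(size=8)  # query close to passage 2
print(route_to_experts(query_emb, passage_embs,
                       ["exp_p0", "exp_p1", "exp_p2", "exp_p3"], k=2))
```

Because routing reduces to nearest-neighbor search over fixed passage embeddings, no extra router training is needed, which matches the abstract's claim that expert selection requires no additional training.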
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: passage-level model editing, mixture of experts, open-domain question answering
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 8494