MKE-PLLM: A benchmark for multilingual knowledge editing on pretrained large language model

Published: 01 Jan 2025 · Last Modified: 13 Oct 2025 · Neurocomputing 2025 · License: CC BY-SA 4.0
Abstract — Highlights:

- Categorizing the challenges of multilingual knowledge editing in LLMs into linguistic bias and knowledge bias, and investigating both issues in depth through a proposed multilingual editing method.
- Introducing a multilingual knowledge editing benchmark with two subdatasets: one containing many multilingual knowledge editing instances, and the other containing different language versions of the same piece of knowledge.
- Conducting comprehensive experiments, spanning quantitative, qualitative, and visual analysis, that reveal a diverse array of phenomena arising during the editing process.