Track: Search and retrieval-augmented AI
Keywords: Knowledge Editing, Retrieval-Augmented Generation, Large Language Model
Abstract: The rapid evolution of the Web as a key platform for information dissemination has led to the growing integration of large language models (LLMs) into Web-based applications. However, the swift changes in web content make it challenging to keep these models relevant and accurate. Knowledge Editing (KE) aims to efficiently and precisely adjust the behavior of LLMs to update specific knowledge while minimizing adverse effects on other knowledge. Current research predominantly concentrates on editing white-box LLMs, overlooking an important scenario: editing black-box LLMs, where models are accessed only through interfaces and return only textual output. In this paper, we first formally introduce KE on black-box LLMs and then present a comprehensive evaluation framework. This framework operates without logits and accounts for pre- and post-edit consistency, addressing the shortcomings of existing evaluations, which are unsuitable for black-box LLM editing and lack comprehensiveness. To address the privacy leakage of editing data and the style over-editing found in existing approaches, we propose a novel postEdit framework. postEdit incorporates a retrieval mechanism for editing knowledge and a purpose-trained editing plugin, the post-editor, preserving privacy through downstream processing and maintaining textual style consistency via fine-grained editing. Experiments and analysis on two benchmarks show that postEdit surpasses all baselines and generalizes robustly, improving style retention by an average of +20.82%. We will release our code after the blind review.
Submission Number: 2345
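To make the abstract's description of postEdit concrete, the sketch below illustrates one plausible reading of such a pipeline: the black-box LLM produces a draft answer, a relevant edit is retrieved from an edit memory, and a separate post-editor model revises only the affected parts of the draft. All names here (postedit_pipeline, Edit, similarity, post_editor, the 0.8 threshold) are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Edit:
    """A single knowledge edit, e.g. 'Who is the CEO of X?' -> 'Y'."""
    prompt: str      # the question the edit targets
    new_answer: str  # the updated fact


def postedit_pipeline(
    query: str,
    edit_memory: List[Edit],
    base_llm: Callable[[str], str],               # black-box LLM: text in, text out
    similarity: Callable[[str, str], float],      # e.g. embedding cosine similarity
    post_editor: Callable[[str, str, str], str],  # (query, draft, edit) -> revised answer
    threshold: float = 0.8,                       # hypothetical relevance cutoff
) -> str:
    # 1. Query the black-box model as usual to obtain a draft response.
    draft = base_llm(query)

    # 2. Retrieve the edit most relevant to the query from the edit memory.
    best: Optional[Edit] = None
    best_score = 0.0
    for edit in edit_memory:
        score = similarity(query, edit.prompt)
        if score > best_score:
            best, best_score = edit, score

    # 3. If no edit is sufficiently relevant, return the draft unchanged,
    #    preserving the base model's behavior and style on unrelated queries.
    if best is None or best_score < threshold:
        return draft

    # 4. Otherwise, let the post-editor rewrite only the outdated spans of the
    #    draft so that it reflects the new fact. Because the edit is applied
    #    downstream of the black-box API call, the editing data need not be
    #    sent to the model provider.
    edit_text = f"{best.prompt} -> {best.new_answer}"
    return post_editor(query, draft, edit_text)
```

Under this reading, "fine-grained editing" corresponds to step 4 revising the draft rather than regenerating an answer from scratch, which is what would preserve the base model's textual style.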