Interpretability-based Tailored Knowledge Editing in Transformers

Published: 03 Oct 2024, Last Modified: 27 Sept 2024 · EMNLP 2024 · CC BY 4.0
Abstract: Language models, increasingly treated as a new form of knowledge base, suffer from outdated, erroneous, and privacy-sensitive information, which necessitates knowledge editing to correct errors without costly retraining. Existing methods, spanning parameter modification, external knowledge integration, and in-context learning, lack in-depth analysis from a model-interpretability perspective. Our work examines the instability of in-context learning outcomes, offering insight into its causes and how it differs from other methods. Leveraging findings on the critical role of feed-forward MLPs in decoder-only models, we propose a tailored knowledge editing method that accounts for the unique information flow of each sample. Interpretability analysis reveals that attributes are recalled at different transformer layers, guiding edits to specific features at different depths and mitigating over-editing. Beyond parameter-based tailored editing, our method introduces diverse structures during editing, simulating the varied ways knowledge manifests in training, yielding improved model performance and better knowledge retention.
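The per-sample targeting idea in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it only shows the general shape of the approach: given hypothetical per-layer attribution scores for a sample (e.g., from a causal-tracing style interpretability pass), pick the layer where that sample's attribute is most strongly recalled, so the edit site varies per sample instead of being a fixed layer. The function name `select_edit_layer` and the score values are illustrative assumptions.

```python
def select_edit_layer(layer_scores):
    """Return the index of the layer with the highest attribution score.

    layer_scores: per-layer contribution scores for one sample, assumed to
    come from an interpretability pass (hypothetical; not the paper's exact
    attribution method).
    """
    return max(range(len(layer_scores)), key=layer_scores.__getitem__)

# Different samples recall their attribute at different depths, so the
# edit target is chosen per sample rather than hard-coded.
samples = {
    "sample_a": [0.10, 0.70, 0.20],  # attribute recalled mid-stack
    "sample_b": [0.05, 0.10, 0.90],  # attribute recalled late
}
edit_targets = {name: select_edit_layer(s) for name, s in samples.items()}
```

Here `edit_targets` maps each sample to the layer index it would edit, reflecting the abstract's point that a single global edit depth can over-edit samples whose information flow differs.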