Abstract: Knowledge editing updates the knowledge of large language models (LLMs) and contributes to the interpretability and application of LLMs. However, knowledge application is context-consistent: LLMs can recall the same knowledge in different contexts. Existing works ignore this property, so their edits lack generalization. Based on empirical evidence, we observe that the effect of different contexts on recalling the same knowledge follows a Gaussian-like distribution. Hence, when editing LLMs, we sample Gaussian noise to simulate the effect of different contexts rather than requiring real contexts. This lets LLMs see the unseen contexts in which the edited knowledge will be applied, thereby improving editing generalization. Experimental results on three LLMs demonstrate the effectiveness of our method and distinguish it from other approaches that fine-tune LLMs with noise.
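To make the core idea concrete, here is a minimal PyTorch sketch, not the authors' released code: it assumes the Gaussian noise simulating context variation is added to the input embeddings during an editing fine-tuning loop (the paper may inject it elsewhere), and the model, edit example, and noise scale `sigma` are all hypothetical placeholders.

```python
# Minimal sketch of noise-augmented knowledge editing (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper evaluates three LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

edit_prompt = "The capital of France is"  # hypothetical edit example
edit_target = " Paris"
ids = tok(edit_prompt + edit_target, return_tensors="pt").input_ids
labels = ids.clone()

sigma = 0.1  # hypothetical scale of the simulated context effect
for step in range(10):
    embeds = model.get_input_embeddings()(ids)
    # Sample Gaussian noise to stand in for unseen recall contexts,
    # instead of collecting real contexts for the edited fact.
    noisy = embeds + sigma * torch.randn_like(embeds)
    loss = model(inputs_embeds=noisy, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Each editing step draws a fresh noise sample, so the model is optimized to recall the edited fact under many simulated contexts rather than the single edit prompt.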
Paper Type: long
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings
Languages Studied: English