Keywords: key-value neural memories, feed-forward networks
TL;DR: We empirically find that updating keys within FFNs yields better performance than updating values when tuning LLMs, suggesting that updating the mechanism by which the model controls knowledge may be more effective than directly modifying the knowledge itself.
Abstract: The feed-forward networks (FFNs) in transformers are recognized as key-value neural memories that store abstract, high-level knowledge.
In this work, we conduct an empirical ablation study on updating the keys (the first layer of the FFN) versus the values (the second layer of the FFN).
We compare the two approaches across various knowledge editing and fine-tuning tasks on large language models to gain further insight into FFNs. Code is available at \href{https://github.com/qiuzh20/Tuning-keys-v.s.-values}{this repo}.
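To make the key/value distinction concrete, below is a minimal sketch of a standard two-layer transformer FFN and of selectively tuning only its first (key) layer. The module and parameter names (`keys`, `values`, `tune_keys_only`) are illustrative assumptions, not the authors' actual code; see the linked repo for the real implementation.

```python
import torch
import torch.nn as nn

class FFN(nn.Module):
    """A standard two-layer FFN viewed as a key-value memory:
    the first linear layer holds the 'keys', the second the 'values'."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.keys = nn.Linear(d_model, d_ff)    # 1st layer: keys
        self.values = nn.Linear(d_ff, d_model)  # 2nd layer: values
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Key matching produces memory coefficients; the output is a
        # coefficient-weighted sum of the value vectors.
        return self.values(self.act(self.keys(x)))

def tune_keys_only(model: nn.Module) -> None:
    """Freeze all parameters except the first (key) layer of each FFN."""
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("keys.weight") or name.endswith("keys.bias")
```

For the value-only ablation, the predicate in `tune_keys_only` would instead match the `values` parameters, leaving the keys frozen.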
Supplementary Material: zip
Submission Number: 237