Keywords: Human-subject application-grounded evaluations, Linguistic theories, Security and privacy
TL;DR: An examination of how large language models retrieve memories, hypothesizing that attention layers reflect psychological principles of cue-based retrieval.
Abstract: While explainable artificial intelligence (XAI) for large language models (LLMs)
remains an evolving field with many unresolved questions, increasing regulatory
pressures have spurred interest in its role in ensuring transparency,
accountability, and privacy-preserving machine unlearning. Although recent
advances in XAI have provided some insights, the specific role of attention
layers in transformer-based LLMs remains underexplored.
This study investigates the memory mechanisms instantiated by attention layers, drawing on prior research in psychology and computational psycholinguistics that links Transformer attention to cue-based retrieval in human memory.
In this view, queries encode the retrieval context, keys index candidate memory
traces, attention weights quantify cue–trace similarity, and values carry the
encoded content, jointly enabling the construction of a context representation
that precedes and facilitates memory retrieval.
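To make this reading concrete, here is a minimal sketch of a single attention head viewed as cue-based retrieval; the dimensions and random data are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy dimensions (assumed for illustration): 5 cached tokens, head dim 8.
rng = np.random.default_rng(0)
d_k = 8
keys = rng.standard_normal((5, d_k))    # indices of candidate memory traces
values = rng.standard_normal((5, d_k))  # encoded content of each trace
query = rng.standard_normal(d_k)        # retrieval cue from the current context

# Attention weights = normalized cue-trace similarity.
weights = softmax(query @ keys.T / np.sqrt(d_k))

# Retrieved context representation = similarity-weighted mixture of trace contents.
retrieved = weights @ values
print(weights.round(3), retrieved.shape)
```

Under this mapping, a high attention weight marks a memory trace whose key closely matches the retrieval cue, and the output is the content recalled from those traces.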
Guided by the Encoding Specificity Principle, we hypothesize that the cues used in the initial stage of retrieval are instantiated as keywords. We provide converging evidence for this keywords-as-cues hypothesis.
In addition, we isolate neurons within attention layers whose activations selectively encode and facilitate the retrieval of context-defining keywords.
Consequently, these keywords can be extracted from the identified neurons and leveraged for downstream applications such as unlearning.
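As a hypothetical sketch of what neuron-level keyword extraction could look like (the token labels, selection criterion, and random activations below are illustrative assumptions; the paper's actual procedure is not specified here):

```python
import numpy as np

tokens = ["the", "encoding", "specificity", "principle", "of", "memory"]
keyword_mask = np.array([0, 1, 1, 1, 0, 1], dtype=bool)  # assumed keyword labels

rng = np.random.default_rng(1)
acts = rng.standard_normal((len(tokens), 16))  # token x neuron activations (toy data)

# Score each neuron by its mean activation on keyword vs. non-keyword tokens,
# one simple proxy for "selectively encodes keywords".
contrast = acts[keyword_mask].mean(axis=0) - acts[~keyword_mask].mean(axis=0)
top_neurons = np.argsort(contrast)[-3:]  # most keyword-selective neurons

# Read out keywords as the tokens that maximally activate those neurons.
for n in top_neurons:
    print(int(n), tokens[int(np.argmax(acts[:, n]))])
```

In an unlearning pipeline, keywords recovered this way could serve as targets for suppressing retrieval of specific contexts, consistent with the downstream application mentioned above.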
Primary Area: interpretability and explainable AI
Submission Number: 24319