DEPTRAI: Detachable External-memory layer for Parameter-Transformer Injection

ICLR 2026 Conference Submission 22659 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Large Language Model, Knowledge Editing, Model Editing
TL;DR: DEPTRAI is a detachable external-memory layer for LLMs that stores edits as key–value facts outside the model
Abstract: Large language models (LLMs) quickly become outdated because the factual knowledge they encode is fixed at training time, and retraining for every new fact is prohibitively expensive. Prior ``internal'' editors apply closed-form perturbations directly to the feed-forward weights, but each new patch is applied in place to the base model, so successive edits accumulate, interfere with one another, and cannot be straightforwardly revoked. We present DEPTRAI (\textbf{D}etachable \textbf{E}xternal-memory layer for \textbf{P}arameter-\textbf{Tra}nsformer \textbf{I}njection), which stores each edited fact as a key–value tuple outside the model, leaving all original weights frozen. At inference, the frozen FFN produces a subject key, which is routed to the nearest stored key using a Mahalanobis metric that mirrors the inverse-covariance scaling of closed-form editors. A lightweight gate then either substitutes the edited value or preserves the base projection. This design turns factual patching into a reversible database-style update rather than a permanent modification of parameters. DEPTRAI achieves the highest average performance on sequential editing tasks, outperforming the latest dual-memory method WISE by \textbf{15--20\%}.
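The abstract describes the mechanism only at a high level, so the following is a minimal sketch of how such a detachable key–value edit layer could be wired around a frozen FFN: Mahalanobis routing to the nearest stored key, then a distance-based gate that either injects the edited value or falls through to the base projection. All names, shapes, the threshold gate, and the class interface are assumptions for illustration, not the authors' implementation.

```python
import torch


class DetachableEditMemory:
    """Sketch of an external key-value edit store kept outside the model.

    Hypothetical interface: the paper's actual gating rule and routing
    details may differ. Base weights are never modified; edits live only
    in this object and can be removed to revert behaviour.
    """

    def __init__(self, inv_cov: torch.Tensor, threshold: float = 0.5):
        self.keys = []              # one stored subject key per edited fact
        self.values = []            # edited value vectors to inject
        self.inv_cov = inv_cov      # (d, d) inverse covariance for Mahalanobis routing
        self.threshold = threshold  # gate: only edit when a stored key is close enough

    def add_edit(self, key: torch.Tensor, value: torch.Tensor) -> None:
        # Storing a new fact is a database-style insert; no weight update occurs.
        self.keys.append(key)
        self.values.append(value)

    def remove_edit(self, idx: int) -> None:
        # Edits are revocable: deleting the tuple restores original behaviour.
        del self.keys[idx]
        del self.values[idx]

    def __call__(self, subject_key: torch.Tensor, base_output: torch.Tensor) -> torch.Tensor:
        if not self.keys:
            return base_output
        stored = torch.stack(self.keys)          # (n_edits, d)
        diff = stored - subject_key              # (n_edits, d)
        # Squared Mahalanobis distance from the subject key to each stored key.
        dist = torch.einsum('nd,de,ne->n', diff, self.inv_cov, diff)
        best = torch.argmin(dist)
        # Lightweight gate: substitute the edited value for near matches,
        # otherwise preserve the frozen FFN projection.
        return self.values[best] if dist[best] < self.threshold else base_output
```

A usage sketch: compute the subject key and base projection with the frozen model's FFN, then pass both through the memory, e.g. `out = memory(subject_key, ffn_output)`; calling `remove_edit` on a stored tuple undoes that edit without touching any parameters.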
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 22659