Keywords: LLM agents, agent memory, multi-user large language models
Abstract: Long-context LLM agents increasingly serve multiple users or personas within a single session, requiring stable identity and knowledge boundaries under frequent switching.
We identify a common failure mode, identity drift, where models conflate user-specific states and leak information across roles.
On BEAM-Switch, a benchmark for controlled multi-user switching, performance consistently degrades as switching intensifies, even when responses remain fluent and locally coherent.
We propose Mentor, a cognitive architecture that mitigates identity drift without fine-tuning. Mentor uses a Dual-Chain Memory Mechanism: a Global Chain ($\mathcal{G}$) for long-term event logging and isolated Role Chains ($\mathcal{R}_r$) as per-role working memories, supported by a semantic Knowledge Graph ($\mathcal{K}$) that filters and verifies role-admissible information before generation. Across six LLM families, Mentor improves the overall score (Avg) from 0.46 to 0.75 on average (+0.29 absolute), with substantial gains in identity adherence and knowledge fidelity.
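The dual-chain design described above can be illustrated with a minimal sketch. All class and method names here are hypothetical, chosen only to mirror the abstract's notation ($\mathcal{G}$, $\mathcal{R}_r$, $\mathcal{K}$); the paper's actual implementation is not shown in this submission record.

```python
# Hypothetical illustration of a dual-chain memory with role-admissibility
# filtering. Names (DualChainMemory, log_event, context_for) are illustrative,
# not the paper's API.
from collections import defaultdict


class DualChainMemory:
    def __init__(self):
        self.global_chain = []                # G: long-term event log, all roles
        self.role_chains = defaultdict(list)  # R_r: isolated per-role working memory
        self.knowledge = {}                   # K: fact -> set of roles admitted to it

    def log_event(self, role, event):
        # Every event is logged globally, but only the owning role's chain sees it.
        self.global_chain.append((role, event))
        self.role_chains[role].append(event)

    def admit_fact(self, fact, roles):
        # Register which roles may see a given piece of knowledge.
        self.knowledge[fact] = set(roles)

    def context_for(self, role):
        # Before generation, expose only the active role's working memory
        # plus knowledge admitted for that role; other roles' state stays isolated.
        facts = [f for f, allowed in self.knowledge.items() if role in allowed]
        return {"working_memory": list(self.role_chains[role]), "facts": facts}


memory = DualChainMemory()
memory.log_event("alice", "prefers formal tone")
memory.log_event("bob", "asked about billing")
memory.admit_fact("shared product FAQ", ["alice", "bob"])
memory.admit_fact("alice account details", ["alice"])
bob_context = memory.context_for("bob")  # sees only bob's chain and shared facts
```

Under this sketch, a role switch simply changes which `context_for` view is assembled, so one role's private state never enters another role's generation context.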
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: LLM agents, agent memory
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings, Approaches to low-compute settings / efficiency, Publicly available software and/or pre-trained models, Data resources, Data analysis, Theory
Languages Studied: English
Submission Number: 7635