Keywords: Large Language Models, Cognitive Architecture, Internal Reasoning, Multi-turn Dialogue, Cognitive AI, Human-AI Interaction, Internal Monologue, Conversational LLMs
TL;DR: MIRROR's modular architecture enables open-source models to surpass proprietary systems in personalized safety for a fraction of the cost, democratizing access to AI that maintains user-specific critical information across multi-turn dialogue.
Abstract: Large language models frequently generate harmful recommendations in personalized multi-turn dialogue by ignoring user-specific safety context, exhibiting sycophantic agreement, and compromising individual user safety in deference to broader group preferences. We introduce MIRROR, a modular, production-focused architecture that prevents these failures through a persistent, bounded internal state that preserves user-specific information across turns. Our dual-component design, inspired by Dual Process Theory, separates immediate response generation (Talker) from asynchronous deliberative processing (Thinker), which synthesizes parallel reasoning threads between turns with marginal latency overhead. On the CuRaTe personalized safety benchmark, MIRROR-augmented models achieve a 21\% average relative improvement (from 69\% to 84\% average absolute performance) across seven diverse frontier models, with open-source Llama 4 and Mistral 3 variants surpassing both GPT-4o and Claude 3.7 Sonnet at only \$0.0028 to \$0.0172 in additional cost per turn, narrowing the safety gap between affordable open-source models and frontier systems. The modular architecture enables flexible deployment: full internal processing for affordable models or single-component configurations for expensive systems, democratizing access to safer, personalized AI.
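The Talker/Thinker split described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name, the thread-based asynchrony, the string-based state, and the `max_state_chars` bound are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of a Talker/Thinker architecture: an immediate
# responder plus asynchronous deliberation that updates a persistent,
# bounded internal state between turns. All names are illustrative.
import threading


class MirrorAgent:
    def __init__(self, max_state_chars=2000):
        self.state = ""                      # persistent, bounded internal state
        self.max_state_chars = max_state_chars
        self._thinker = None                 # background deliberation thread

    def respond(self, user_turn):
        """Talker: answer immediately, conditioned on the current state."""
        reply = f"[conditioned on state: {self.state[:40]!r}] {user_turn}"
        # Start asynchronous deliberation so the *next* turn sees an
        # updated state without adding latency to this reply.
        self._thinker = threading.Thread(target=self._deliberate, args=(user_turn,))
        self._thinker.start()
        return reply

    def _deliberate(self, user_turn):
        """Thinker: synthesize parallel reasoning threads into the state."""
        threads = [f"goal:{user_turn}", f"safety:{user_turn}", f"memory:{user_turn}"]
        merged = self.state + " | " + "; ".join(threads)
        self.state = merged[-self.max_state_chars:]  # enforce the bound

    def finish_turn(self):
        """Join deliberation before the next user turn, if still running."""
        if self._thinker is not None:
            self._thinker.join()
```

In this sketch, safety-relevant facts (e.g. an allergy mentioned in turn one) persist in `state` and condition every later reply, which is the failure mode the abstract says MIRROR targets.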
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 24492