TwinVoice: A Multi-dimensional Benchmark Towards Digital Twins via LLM Persona Simulation

ACL ARR 2026 January Submission 1708 Authors

31 Dec 2025 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Persona Simulation, Digital Twins, Benchmarking, Computational Social Science, Evaluation Methodologies, Psychology of LLMs
Abstract: Large Language Models (LLMs) are exhibiting emergent human-like abilities and are envisioned as tools for simulating an individual's communication patterns, behaviors, and personality traits. However, current evaluations of LLM-based persona simulation remain limited: most rely on synthetic dialogues and lack fine-grained analysis of persona simulation capabilities. To address these limitations, we introduce TwinVoice, a comprehensive benchmark for assessing persona simulation across diverse real-world contexts. TwinVoice encompasses three dimensions: Social Persona (public social interactions), Interpersonal Persona (private dialogues), and Narrative Persona (role-based expression). It further decomposes the evaluation into six fundamental capabilities: opinion consistency, memory recall, logical reasoning, lexical fidelity, persona tone, and syntactic style. Experimental results reveal that while advanced models achieve moderate accuracy in persona simulation, they still fall short in capabilities such as syntactic style and memory recall. Our data, code, and evaluation results are available.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Benchmarking, Datasets, Evaluation Methodologies, Large Language Models, Dialogue Systems
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources
Languages Studied: Chinese, English, Spanish, Portuguese, Russian
Submission Number: 1708