Can LLMs Simulate Personas with Reversed Performance? A Systematic Investigation for Counterfactual Instruction Following in Math Reasoning Context

Published: 24 Jul 2025, Last Modified: 01 Aug 2025 · Social Sim'25 · CC BY 4.0
Keywords: Counterfactual Instruction Following, Persona Simulation, Math Reasoning
TL;DR: We investigate LLMs' capability on a counterfactual instruction-following task, which requires models to simulate personas with reversed performance, such as students with low proficiency in math reasoning.
Abstract: Large Language Models (LLMs) are increasingly used to simulate personas in virtual environments, leveraging their instruction-following capability. However, we discovered that even state-of-the-art LLMs cannot simulate personas with reversed performance (e.g., student personas with low proficiency in educational settings), which impairs simulation diversity and limits the practical applications of simulated environments. In this work, using mathematical reasoning as a representative scenario, we conduct, to the best of our knowledge, the first study to evaluate LLMs on simulating personas with reversed performance, a capability that we dub "counterfactual instruction following". We evaluate both open-weight and closed-source LLMs on this task and find that all of them, including the OpenAI o1 reasoning model, struggle to follow counterfactual instructions for simulating personas with reversed performance. Intersectionally simulating both the performance level and the racial group of a persona worsens the effect even further. These results highlight the challenges of counterfactual instruction following and the need for further research.
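For concreteness, below is a minimal sketch of what a counterfactual instruction for this task might look like, written against the OpenAI Python client. The prompt wording, model choice, and example problem (a well-known GSM8K item) are illustrative assumptions, not the paper's actual prompts or protocol.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical counterfactual instruction: the persona's target behavior
# (low math proficiency) runs against the model's default tendency to
# answer correctly. Wording is illustrative, not taken from the paper.
persona_instruction = (
    "You are role-playing a middle-school student with low proficiency "
    "in math. Attempt the problem as this student would, including the "
    "kinds of mistakes such a student plausibly makes."
)

problem = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell "
    "altogether in April and May?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; the paper also evaluates o1
    messages=[
        {"role": "system", "content": persona_instruction},
        {"role": "user", "content": problem},
    ],
)

# Per the paper's finding, answers to prompts like this tend to remain
# correct, i.e., the model fails to follow the counterfactual instruction.
print(response.choices[0].message.content)
```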
Submission Number: 15