One LLM Does Not Simulate All Students: Ability-Aware Student Simulation via Cognitive Diagnosis Guided LLM Assignment
Keywords: Large Language Models, Student Behavior Simulation, Cognitive Diagnosis, Ability-Aware Model Assignment, Simulation Bias
Abstract: Large Language Models (LLMs) have become integral to personalized education systems, particularly in student behavior simulation. By predicting fine-grained learning behaviors, these simulations enable intelligent systems to provide tailored instructional support. However, most existing methods rely on a single high-capacity LLM to represent an entire population of diverse learners. In this work, we demonstrate that this “one-size-fits-all” approach induces a systematic *ability-dependent bias*: high-capacity models tend to overestimate the performance of low-ability students, while lower-capacity models underestimate that of high-ability students. To mitigate this distortion, we propose an **ability-aware student simulation framework** that dynamically matches students with appropriate LLM backbones through cognitive alignment. We leverage Neural Cognitive Diagnosis (NeuralCD) to extract multidimensional cognitive profiles for both human students and LLM agents within a shared skill space, then pair each student with the most cognitively representative model. Extensive experiments demonstrate that our approach substantially reduces simulation bias and consistently outperforms single-model baselines across the entire proficiency spectrum. Our findings suggest that faithful behavior simulation necessitates the **alignment of model capacity with student ability**, establishing cognitive diagnosis as a principled mechanism for model assignment in educational AI.
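The assignment step described in the abstract — diagnosing students and LLM agents in a shared skill space, then pairing each student with the closest model — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not specify the matching metric, so nearest-neighbor Euclidean distance over diagnosed proficiency vectors is assumed here, and `assign_llms` is a hypothetical helper name.

```python
import numpy as np

def assign_llms(student_profiles: np.ndarray, llm_profiles: np.ndarray) -> np.ndarray:
    """Pair each student with the most cognitively similar LLM agent.

    student_profiles: (n_students, n_skills) diagnosed skill proficiencies
    llm_profiles:     (n_llms, n_skills) diagnosed skill proficiencies,
                      in the same shared skill space
    Returns an array of LLM indices, one per student.
    """
    # Pairwise Euclidean distances in the shared skill space
    # (assumed metric; the paper may use a different similarity measure).
    dists = np.linalg.norm(
        student_profiles[:, None, :] - llm_profiles[None, :, :], axis=2
    )
    # Each student is assigned the cognitively closest model.
    return dists.argmin(axis=1)

# Toy example: a high-ability and a low-ability student, matched against
# a high-capacity and a low-capacity model's diagnosed profiles.
students = np.array([[0.9, 0.8], [0.2, 0.1]])
llms = np.array([[0.85, 0.9], [0.15, 0.2]])
print(assign_llms(students, llms))  # → [0 1]
```

Here the high-ability student is routed to the stronger model and the low-ability student to the weaker one, which is the intended effect of ability-aware assignment.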
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: LLM agents, safety and alignment for agents, agent evaluation, agent simulation
Contribution Types: Model analysis & interpretability, Position papers
Languages Studied: English
Submission Number: 10052