Keywords: natural language processing, large language model, multi-agent systems, GraphRAG, student modeling
Abstract: In multi-agent teaching simulation scenarios, large language models (LLMs) exhibit an inherent assistant-oriented bias, leading them to generate overly advanced responses when acting as student agents, which limits their ability to accurately reflect real students’ cognitive states and learning behaviors. To address this limitation, we model students’ cognitive graphs and propose GraphLR-MPP, a Graph-structured Learning Report Enhanced Math Performance Predictor, which leverages GraphRAG-generated learning reports to train a model that predicts students’ math problem-solving performance. Experimental results demonstrate that our method outperforms existing in-context learning (ICL) approaches and other supervised fine-tuning (SFT) methods. Furthermore, we introduce a Multi-Agent Teaching Intervention Trial that simulates the dynamic updating of students’ cognition under instructional interventions, providing a scalable foundation for future agent-based teaching simulation experiments.
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: LLM agents, multi-agent systems, agent coordination and negotiation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 7079