Abstract: Large language models (LLMs) are transforming education by answering questions, explaining complex concepts, and generating content across a wide range of subjects. However, despite strong performance on academic benchmarks, they often fail to adapt their responses to students’ grade levels. This adaptation is critical in K–12 education, where age-appropriate vocabulary and explanations are essential for effective learning. Existing models frequently produce outputs that are too advanced or too vague for younger learners, and there are no standardized benchmarks for evaluating their ability to adjust across cognitive and developmental stages. To address this gap, we introduce EduAdapt, a benchmark of nearly 48k grade-labeled QA pairs across nine science subjects, spanning Grades 1–12 and grouped into four grade levels. We evaluate a diverse set of open-source LLMs and find that while larger models generally perform better, they still struggle to generate suitable responses for early-grade students (Grades 1–5). Our work presents the first dataset and evaluation framework for assessing grade-level adaptability in LLMs, aiming to foster more developmentally aligned educational AI systems through better training and prompting strategies. EduAdapt's code and datasets are open-sourced and publicly available at [URL redacted].
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: open-domain QA, LLM efficiency, question generation, cognitive modeling, generative models
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources, Data analysis
Languages Studied: English
Submission Number: 4509