Using LLMs to simulate students' responses to exam questions

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
TL;DR: This paper studies the feasibility of using LLMs to answer exam questions while simulating the responses of students at different skill levels.
Abstract: Previous research has shown that Large Language Models (LLMs) can be leveraged in numerous ways in the educational domain. In this work, we study whether they can be used to answer exam questions while simulating students of different skill levels. From an educational perspective, this could enable the automatic evaluation of learning and exam content; from a computational linguistics perspective, it could help in understanding the learning process and knowledge of LLMs. By experimenting on three publicly available datasets, we show that it is indeed possible to prompt LLMs to simulate students of different skill levels using abstract scales, and we share a prompt that proved effective in two different educational domains. We also show that, although the prompt generalises to different datasets, it does not generalise to different LLMs, and the LLMs do not seem capable of easily simulating students at specific levels of standardised educational scales.
Paper Type: long
Research Area: NLP Applications
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
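
As a rough illustration of the level-conditioned prompting the abstract describes, the Python sketch below composes a prompt that asks an LLM to role-play a student at a given point on an abstract skill scale. The scale bounds, the prompt wording, and the example question are hypothetical assumptions for illustration; they are not the paper's released prompt.

    # Hypothetical sketch of level-conditioned prompting: the model is asked
    # to answer an exam question while role-playing a student at a given point
    # on an abstract skill scale. Wording and scale are illustrative only.

    def build_student_prompt(question: str, level: int, max_level: int = 5) -> str:
        """Compose a prompt asking an LLM to answer as a student of a given level."""
        assert 1 <= level <= max_level, "level must lie on the abstract scale"
        return (
            f"You are a student whose skill level is {level} on a scale "
            f"from 1 (beginner) to {max_level} (expert).\n"
            "Answer the following exam question exactly as such a student would, "
            "including any mistakes a student at this level might plausibly make.\n\n"
            f"Question: {question}\nAnswer:"
        )

    if __name__ == "__main__":
        q = "Explain why the sky appears blue."
        for lvl in (1, 3, 5):
            print(build_student_prompt(q, lvl))
            print("-" * 40)

The resulting prompt string would be sent to the LLM under study; comparing answers generated at different values of the abstract level is one way to probe whether the model's simulated skill actually varies with the requested level.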