Large Language Models are Students at Various Levels: Zero-shot Question Difficulty Estimation

ACL ARR 2024 June Submission2300 Authors

15 Jun 2024 (modified: 02 Jul 2024) · CC BY 4.0
Abstract: Recent advancements in educational platforms have emphasized the importance of personalized education. Accurately estimating question difficulty based on the group level of a student is essential for personalized question recommendations. Several studies have focused on predicting question difficulty using student question-solving records or textual information about the questions. However, these approaches require a large amount of student question-solving records and fail to account for the subjective difficulties perceived by different student groups. To address these limitations, we propose the LLaSA framework, which utilizes large language models to represent students at various levels. LLaSA estimates question difficulty using student abilities derived from their question-solving records. Furthermore, the zero-shot LLaSA can estimate question difficulty without any student question-solving records. In evaluations on the DBE-KT22 and ASSISTments 2005–2006 benchmarks, the zero-shot LLaSA demonstrated performance comparable to that of strong baseline models even without any training. When evaluated using the classification method, LLaSA outperformed the baseline models, achieving state-of-the-art performance. In addition, the zero-shot LLaSA achieved a high correlation with the question difficulty derived from students' question-solving records, suggesting the potential of LLaSA for real-world applications.
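The zero-shot idea described in the abstract can be illustrated with a minimal sketch: prompt an LLM to role-play students at several ability levels, record whether each simulated student answers the question correctly, and take the failure rate as an estimated difficulty. Note that the level names, the prompt template, and the `ask_llm` interface below are illustrative assumptions for exposition, not the authors' actual implementation.

```python
from typing import Callable, Sequence

# Assumed ability tiers; the paper's actual student levels may differ.
LEVELS = ["beginner", "intermediate", "advanced"]

def estimate_difficulty(
    question: str,
    correct_answer: str,
    ask_llm: Callable[[str], str],
    levels: Sequence[str] = LEVELS,
) -> float:
    """Return an estimated difficulty in [0, 1]: the fraction of
    simulated student levels that fail to answer correctly."""
    wrong = 0
    for level in levels:
        # Hypothetical persona prompt; the real prompt design is in the paper.
        prompt = (
            f"You are a {level}-level student. Answer concisely.\n"
            f"Q: {question}\nA:"
        )
        answer = ask_llm(prompt)
        if answer.strip().lower() != correct_answer.strip().lower():
            wrong += 1
    return wrong / len(levels)
```

In practice `ask_llm` would wrap an API call to an actual model; here it is kept abstract so the aggregation logic stands alone. A question that only the "advanced" persona answers correctly would receive a difficulty of 2/3 under this scheme.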
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: educational applications, NLP in resource-constrained settings, inference methods, prompting
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 2300