Knowledge Tagging on Math Questions via LLMs with Flexible Sequential Demonstration Retriever

ACL ARR 2024 August Submission 174 Authors

15 Aug 2024 (modified: 22 Sept 2024) · ACL ARR 2024 August Submission · CC BY 4.0
Abstract: Knowledge tagging for questions plays a crucial role in intelligent educational applications. Traditionally, these annotations have been performed by pedagogical experts, as the task requires deep insight into connecting question-solving logic with the corresponding knowledge concepts. With the recent emergence of advanced text encoding algorithms, such as pre-trained language models (PLMs), many researchers have developed automatic knowledge tagging systems based on deep semantic embeddings. In this paper, we explore automating the task with Large Language Models (LLMs), in response to the inability of prior encoding-based methods to handle hard cases that involve strong domain knowledge and complicated concept definitions. By showing strong zero- and few-shot performance on math question knowledge tagging tasks, we demonstrate the great potential of LLMs in overcoming the challenges faced by prior methods. Furthermore, by proposing a reinforcement learning-based demonstration retriever, we successfully exploit the potential of different-sized LLMs to achieve better performance while keeping in-context demonstration usage efficient.
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: AI in Education, Large Language Model, Reinforcement Learning
Contribution Types: NLP engineering experiment, Data resources, Data analysis
Languages Studied: English
Submission Number: 174