One Step at a Time: Progressive Multi-Step Reasoning with LLMs for Automatic Knowledge Tagging

ACL ARR 2025 February Submission 611 Authors

10 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Knowledge tagging, which associates educational materials with their most pertinent knowledge concepts, is a fundamental task in intelligent education. In practical scenarios, however, most existing methods hit bottlenecks because of the domain expertise that knowledge concepts demand and the confusion among similar concepts. In this paper, we propose LLM4KTS, a progressive multi-step reasoning paradigm that fully exploits the reasoning ability of large language models (LLMs) for knowledge tagging. To build LLM4KTS, we first construct a multi-step reasoning dataset with gradual thinking and reasoning. LLM4KT is then fine-tuned on this dataset to align the LLM with progressive reasoning. Next, we introduce a step-level score preference optimization (SSPO) method to further fine-tune LLM4KT, improving the effectiveness and quality of its reasoning processes. Moreover, we apply a scoring model to scale up inference and guide the decoding process. Extensive experiments verify that LLM4KTS achieves significant improvements in knowledge tagging performance, outperforming current methods.
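The score-guided decoding mentioned in the abstract can be pictured as a step-level best-of-N search: at each reasoning step, a scoring model ranks candidate continuations and the highest-scoring partial chain is extended. The sketch below is a minimal toy illustration of that idea, not the paper's implementation; `generate_candidates` and `score_step` are hypothetical stand-ins for the LLM sampler and the scoring model.

```python
def generate_candidates(prefix, n=3):
    """Stand-in for LLM sampling: returns n candidate next reasoning steps."""
    return [f"{prefix} -> step{i}" for i in range(n)]

def score_step(chain):
    """Stand-in for the scoring model: here it simply prefers 'step2'."""
    return chain.count("step2")

def guided_decode(prompt, depth=3):
    """Greedy step-level decoding guided by the scoring model:
    at each depth, keep only the best-scoring candidate chain."""
    chain = prompt
    for _ in range(depth):
        candidates = generate_candidates(chain)
        chain = max(candidates, key=score_step)
    return chain

print(guided_decode("tag-question"))
# -> tag-question -> step2 -> step2 -> step2
```

A real system would replace the greedy `max` with a beam over partial chains and call the fine-tuned LLM and scoring model at each step; the control flow, however, stays the same.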
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: educational applications, optimization methods
Languages Studied: English
Submission Number: 611