Question Difficulty Consistent Knowledge Tracing

Published: 01 Jan 2024 · Last Modified: 10 Jan 2025 · WWW 2024 · CC BY-SA 4.0
Abstract: Knowledge tracing aims to estimate students' knowledge states over a set of skills based on their past learning activities. Deep-learning-based knowledge tracing models show superior performance to traditional knowledge tracing approaches. Early works like DKT use only skill IDs and student responses. Recent works also incorporate question IDs into their models and achieve much improved performance on the next-question correctness prediction task. However, the predictions made by these models are tied to specific questions, and it is not straightforward to translate them into estimates of students' knowledge states over skills. In this paper, we propose to replace question IDs with question difficulty levels in deep knowledge tracing models. The predictions made by our model can be more readily translated into students' knowledge states over skills. Furthermore, by replacing question IDs with question difficulty levels, we also alleviate the cold-start problem in knowledge tracing, since online learning platforms are frequently updated with new questions. We further use two techniques to smooth the predicted scores. One is to combine embeddings of nearby difficulty levels using the Hann function. The other is to constrain the predicted probabilities to be consistent with question difficulties by imposing a penalty when they are not. We conduct extensive experiments to study the performance of the proposed model. Our experimental results show that our model outperforms state-of-the-art knowledge tracing models in terms of both accuracy and consistency with question difficulty levels.
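The two smoothing techniques named in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the choice of window half-width, and the exact form of the monotonicity penalty are assumptions; the paper specifies only that nearby difficulty-level embeddings are combined with Hann-function weights and that predictions inconsistent with difficulty are penalized.

```python
import numpy as np

def hann_weights(center, num_levels, half_width):
    """Hann (raised-cosine) weights over difficulty levels near `center`.

    Levels farther than `half_width` from the center get zero weight;
    the remaining weights are normalized to sum to 1.
    """
    levels = np.arange(num_levels)
    dist = np.abs(levels - center)
    w = np.where(dist <= half_width,
                 0.5 * (1.0 + np.cos(np.pi * dist / (half_width + 1))),
                 0.0)
    return w / w.sum()

def smoothed_embedding(emb_table, center, half_width):
    """Combine embeddings of nearby difficulty levels with Hann weights.

    emb_table: (num_levels, dim) array of difficulty-level embeddings.
    Returns a (dim,) weighted-average embedding for level `center`.
    """
    w = hann_weights(center, emb_table.shape[0], half_width)
    return w @ emb_table

def difficulty_consistency_penalty(probs, difficulties):
    """Penalize predictions that are not consistent with difficulty.

    When questions are sorted from easiest to hardest, the predicted
    correctness probabilities should be non-increasing; any increase
    (a harder question predicted easier) contributes to the penalty.
    """
    order = np.argsort(difficulties)
    p = probs[order]
    return float(np.sum(np.clip(np.diff(p), 0.0, None)))
```

For example, with a half-width of 2, the embedding for difficulty level 5 becomes a weighted average of levels 3 through 7, with the weight decaying smoothly from the center; the penalty is zero exactly when predicted probabilities decrease monotonically with difficulty, so it can be added to the training loss as a soft consistency constraint.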