Self-KT: Self-attentive Knowledge Tracing with Feature Fusion Pre-training in Online Education

Published: 01 Jan 2024 · Last Modified: 12 Jan 2025 · IJCNN 2024 · CC BY-SA 4.0
Abstract: The goal of the Knowledge Tracing (KT) task is to accurately predict a student's ability to correctly answer the next question based on their previous responses. Recent studies have shown promising results by employing pre-training models to capture general feature representations between questions and skills, and subsequently fine-tuning these models for the KT task. However, these methods still struggle to represent question difficulty accurately and fail to consider the impact of feature fusion during pre-training. Additionally, existing models do not effectively exploit during fine-tuning the high-level semantic information obtained from pre-training, leaving much of its potential unused. To this end, this paper proposes Self-attentive Knowledge Tracing (Self-KT) with Feature Fusion Pre-training in the Online Education domain to address these challenges. Self-KT introduces a novel representation of question difficulty and implements dynamic feature fusion to obtain question embeddings. Furthermore, it enhances the self-attention mechanism by considering the influence of subsequent questions on the current question. We evaluated Self-KT on multiple publicly available datasets, and the results demonstrate that it significantly outperforms current state-of-the-art knowledge tracing methods.
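The abstract's key architectural idea is letting each question attend to *subsequent* questions as well as earlier ones, i.e., dropping the causal mask used in standard attentive KT models. The sketch below illustrates that contrast in plain NumPy; the function names, embedding dimensions, and overall shape are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, causal=True):
    """Scaled dot-product self-attention over a sequence of question
    embeddings X of shape (seq_len, d).

    causal=True:  each position attends only to itself and earlier
                  questions (the usual KT setup).
    causal=False: each position also attends to subsequent questions,
                  mirroring the bidirectional idea the abstract describes.
    (Hypothetical sketch; Self-KT's actual layers are not specified here.)
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)  # (seq_len, seq_len) similarity scores
    if causal:
        # Mask out strictly-future positions with a large negative value.
        future = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(future, -1e9, scores)
    return softmax(scores, axis=-1) @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))          # 5 questions, 8-dim embeddings
A_causal = self_attention(X, causal=True)
A_full = self_attention(X, causal=False)
```

With the causal mask, the first output position can only attend to itself and so reproduces its own embedding; without the mask it mixes in information from later questions, which is the behavior the paper's modified self-attention exploits.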