Abstract: Knowledge tracing (KT), which estimates and traces learners' mastery of concepts from their responses to learning resources, has become an increasingly relevant problem in intelligent education. Prediction accuracy depends heavily on the quality of question representations. While contrastive learning is commonly used to produce high-quality representations, selecting positive and negative samples for knowledge tracing remains a challenge. To address this issue, we propose an adversarial bootstrapped question representation (ABQR) model, which generates robust, high-quality question representations without requiring negative samples. Specifically, ABQR adopts the bootstrap self-supervised learning framework: it learns question representations from different views of the skill-informed question interaction graph and trains the representations from each view to predict one another, thereby circumventing negative sample selection. Moreover, we propose a multi-objective, multi-round feature adversarial graph augmentation method that yields a higher-quality target view while preserving the structural information of the original graph. ABQR is versatile and can be easily integrated with any base KT model as a plug-in to enhance the quality of question representations. Extensive experiments demonstrate that ABQR significantly improves the performance of base KT models and outperforms state-of-the-art models. Ablation studies confirm the effectiveness of each module of ABQR. The code is available at https://github.com/lilstrawberry/ABQR.
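The negative-sample-free bootstrap objective the abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the dimensions, the noisy-copy "views" (standing in for graph augmentation), and the linear encoders are hypothetical placeholders; the sketch only shows the core idea that an online branch predicts a target branch (an EMA copy), so no negatives are needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 questions, 16-dim features, 8-dim projections.
num_q, d_in, d_out = 8, 16, 8

# Two "views" of the question features (noisy copies stand in for the
# paper's graph augmentation of the skill-informed interaction graph).
base = rng.normal(size=(num_q, d_in))
view_online = base + 0.1 * rng.normal(size=base.shape)
view_target = base + 0.1 * rng.normal(size=base.shape)

# Online encoder + predictor, and a target encoder kept close to the
# online weights (in training it would be an EMA copy, not updated by
# gradients).
W_online = rng.normal(size=(d_in, d_out))
W_pred = rng.normal(size=(d_out, d_out))
tau = 0.99  # EMA decay
W_target = tau * W_online + (1 - tau) * rng.normal(size=(d_in, d_out))

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Online branch predicts the target branch's projection.
p = l2_normalize(view_online @ W_online @ W_pred)  # online prediction
z = l2_normalize(view_target @ W_target)           # target projection

# Bootstrap loss: 2 - 2*cos(p, z), averaged over questions.
# Only cross-view agreement is rewarded -- no negative pairs appear.
loss = float(np.mean(2.0 - 2.0 * np.sum(p * z, axis=1)))
print(f"bootstrap loss: {loss:.4f}")
```

Because the loss depends only on agreement between the two branches, minimizing it never requires contrasting a question against sampled negatives, which is exactly the selection problem the abstract says ABQR sidesteps.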