Integrating fine-grained attention into multi-task learning for knowledge tracing

Published: 01 Jan 2023, Last Modified: 04 Nov 2023 · World Wide Web (WWW) 2023
Abstract: Knowledge Tracing (KT) is the task of modeling a learner's evolving knowledge state from their past performance on e-learning platforms. Existing KT models usually leverage only the response feedback (correct or incorrect) that learners generate while making exercises, which limits how accurately they can capture the knowledge growth after each exercise. Some researchers have tried to jointly learn hint-taking and response predictions with multi-task learning, but achieve only limited improvement due to the imprecision of the related task's feedback and the rigid fusion of multi-task features. This paper proposes Multi-task Attentive Knowledge Tracing (MAKT), which jointly learns hint-taking and attempt-making predictions alongside response prediction. Two specific models are proposed under MAKT: a Bi-task Attentive Knowledge Tracing model (BAKT) and a Tri-task Attentive Knowledge Tracing model (TAKT). BAKT jointly learns a single related task with response prediction via two fine-grained attention mechanisms: an imbalance-aware attention mechanism and a skill-aware attention mechanism. The former addresses the inherent imbalance of exercise samples in KT; the latter realizes skill individualization in both the multi-task feature fusion and the multi-model feature fusion stages. TAKT jointly learns two related tasks alongside response prediction based on the skill-aware attention mechanism, and has the potential to be extended with more related tasks. Experiments on several real-world benchmark datasets show that MAKT outperforms state-of-the-art KT methods at predicting future learner responses, which also indicates a bright outlook for combining KT with multi-task learning.
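The abstract describes two ideas that can be illustrated in miniature: combining a main response-prediction loss with an auxiliary hint-taking loss (multi-task learning), and re-weighting imbalanced exercise samples. The sketch below is an assumption-laden toy, not the paper's actual MAKT architecture: the function names (`multi_task_loss`, `imbalance_weights`), the inverse-frequency weighting, and the fixed mixing coefficient `alpha` are all hypothetical stand-ins for the paper's attention-based mechanisms.

```python
import math

def bce(p, y):
    # Binary cross-entropy for a single prediction p against label y.
    eps = 1e-9
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def imbalance_weights(labels):
    # Inverse-frequency weights: the rarer outcome (e.g. incorrect
    # responses) receives a larger weight. A simple stand-in for the
    # paper's imbalance-aware attention over exercise samples.
    n = len(labels)
    pos = sum(labels)
    neg = n - pos
    w_pos = n / (2 * pos) if pos else 0.0
    w_neg = n / (2 * neg) if neg else 0.0
    return [w_pos if y == 1 else w_neg for y in labels]

def multi_task_loss(resp_preds, resp_labels, hint_preds, hint_labels,
                    alpha=0.7):
    # Joint objective: response prediction (main task) plus hint-taking
    # prediction (auxiliary task), mixed by a hypothetical weight alpha.
    w = imbalance_weights(resp_labels)
    resp_loss = sum(wi * bce(p, y)
                    for wi, p, y in zip(w, resp_preds, resp_labels))
    resp_loss /= len(resp_labels)
    hint_loss = sum(bce(p, y)
                    for p, y in zip(hint_preds, hint_labels))
    hint_loss /= len(hint_labels)
    return alpha * resp_loss + (1 - alpha) * hint_loss
```

In MAKT itself the fusion of task features is learned through skill-aware attention rather than a fixed `alpha`; the sketch only conveys why an auxiliary task and sample re-weighting can sharpen the main response-prediction signal.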