Keywords: Curriculum Learning, Model Training, Problem Difficulty, Skill Level, Chess, Mathematics
Abstract: Curriculum learning, ordering training examples in a sequence based on difficulty, takes inspiration from human learning but has not gained widespread acceptance. Static strategies for scoring item difficulty produce curricula that are not specific to the learner at hand, and that rely on indirect proxy scores of varying quality. Dynamic approaches base difficulty estimates on gradient information, requiring considerable extra computation during training. We introduce a novel method for measuring the difficulty of individual problem instances directly relative to the ability of a given model, and identify transitional problems that are consistently easier as model ability increases. Applying this method to chess and mathematics, we find that training on appropriately calibrated problems most efficiently "levels up" a model to the next competence tier. These problems induce a natural progression from easier to harder items, which outperforms other training strategies. By measuring difficulty directly relative to model competence, our method yields interpretable transition problems, learner-specific curricula, and a principled basis for step-by-step improvement.
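The abstract's core idea, measuring difficulty relative to model ability and flagging "transitional" problems that become solvable as competence rises, can be illustrated with a minimal sketch. All names, the solve-rate framing, and the 0.5 threshold here are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: flag "transitional" problems from per-tier solve rates.
# Assumption: for each problem we have its solve rate under models of
# increasing skill tiers (e.g. chess Elo bands or math grade levels).

def difficulty(solve_rate: float) -> float:
    """Difficulty of a problem relative to a model: one minus its solve rate."""
    return 1.0 - solve_rate

def is_transitional(rates: list[float], threshold: float = 0.5) -> bool:
    """In this sketch, a problem is 'transitional' if its solve rate rises
    monotonically with model skill tier and crosses the threshold between
    two adjacent tiers, i.e. it becomes consistently easier as ability grows."""
    monotone = all(a <= b for a, b in zip(rates, rates[1:]))
    crosses = any(a < threshold <= b for a, b in zip(rates, rates[1:]))
    return monotone and crosses

# rates[i] = solve rate of tier-i models on one problem (low to high tier)
print(is_transitional([0.1, 0.4, 0.8, 0.9]))  # True: crosses 0.5 going up
print(is_transitional([0.9, 0.8, 0.2, 0.1]))  # False: solve rate decreases
```

Problems flagged this way would naturally sort into a learner-specific easy-to-hard progression, matching the curriculum the abstract describes.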
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 23467