Scheduled Learning with Declining Diversity and Incremental Difficulty

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: We study how to adaptively select training subsets for different stages of iterative machine learning. We introduce minimax curriculum learning (MCL), which first trains a model on a few diverse samples and later on a larger training set of concentrated, hard samples, thereby avoiding wasted effort on redundant samples in early stages and on dispersed outliers in later stages. At each stage, the model weights and training set are updated by solving a minimax optimization whose objective combines a loss (reflecting the hardness of the training set) and a submodular regularizer (measuring its diversity). MCL repeatedly solves a sequence of such optimizations with decreasing diversity and increasing training set size. Unlike the expensive alternating minimization used in previous work, we reduce MCL to the minimization of a surrogate function that can be handled by submodular maximization and optimized by gradient methods. We show that MCL achieves better performance while using fewer labeled samples, for both shallow and deep models.
  • TL;DR: Scheduling a learning process with decreasing diversity and increasing difficulty improves performance and requires fewer training samples.
  • Keywords: deep learning, minimax, curriculum learning, submodular, diversity
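
The following is a minimal, self-contained sketch of the staged schedule described in the abstract, on a toy logistic-regression model: at each stage, an inner greedy maximization selects a subset that is both hard (high per-sample loss) and diverse (facility-location submodular gain), an outer minimization runs gradient steps on that subset, and across stages the subset size grows while the diversity weight decays. All names, the toy model, the facility-location choice of submodular function, and the hyperparameters are illustrative assumptions, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

def per_sample_loss(w, X, y):
    # Logistic loss per sample, used as a hardness score.
    return np.log1p(np.exp(-y * (X @ w)))

def grad_step(w, X, y, lr=0.1):
    # One gradient step of logistic regression on the selected subset (outer minimization).
    z = X @ w
    g = -(y / (1.0 + np.exp(y * z))) @ X / len(y)
    return w - lr * g

def facility_location_gain(selected, j, sim):
    # Marginal diversity gain of adding sample j under a facility-location function.
    if not selected:
        return sim[:, j].sum()
    current = sim[:, selected].max(axis=1)
    return np.maximum(sim[:, j] - current, 0.0).sum()

def select_subset(losses, sim, k, lam):
    # Greedy inner maximization: pick k samples that are hard AND diverse.
    selected = []
    for _ in range(k):
        best, best_gain = -1, -np.inf
        for j in range(len(losses)):
            if j in selected:
                continue
            gain = losses[j] + lam * facility_location_gain(selected, j, sim)
            if gain > best_gain:
                best, best_gain = j, gain
        selected.append(best)
    return selected

# Toy data: 100 samples, two classes, RBF similarity matrix.
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = np.sign(X @ w_true)
sim = np.exp(-np.square(X[:, None, :] - X[None, :, :]).sum(-1))

w = np.zeros(5)
k, lam = 10, 1.0
for stage in range(5):
    losses = per_sample_loss(w, X, y)
    idx = select_subset(losses, sim, min(k, len(X)), lam)
    for _ in range(10):
        w = grad_step(w, X[idx], y[idx])
    k = int(k * 1.5)   # increase training set size
    lam *= 0.5         # decrease diversity weight
print("train accuracy:", (np.sign(X @ w) == y).mean())

The schedule (growth factor 1.5, decay factor 0.5, 5 stages) is only one possible instantiation of "decreasing diversity and increasing training set size"; the paper's actual surrogate-based optimization is not reproduced here.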
