Understanding the role of importance weighting for deep learning

28 Sep 2020 (modified: 25 Jan 2021) · ICLR 2021 Spotlight
  • Keywords: Importance Weighting, Deep Learning, Implicit Bias, Gradient Descent, Learning Theory
  • Abstract: The recent paper by Byrd & Lipton (2019), based on empirical observations, raises a major concern about the impact of importance weighting on over-parameterized deep learning models: they observe that as long as the model can separate the training data, the effect of importance weighting diminishes as training proceeds. However, this phenomenon has so far lacked a rigorous characterization. In this paper, we formally characterize and theoretically justify the role of importance weighting through the implicit bias of gradient descent and margin-based learning theory. We analyze both the optimization dynamics and the generalization performance of deep learning models under importance weighting. Our work not only explains the various novel phenomena observed for importance weighting in deep learning, but also extends to settings where the weights are optimized as part of the model, which applies to a number of topics under active research.
  • One-sentence Summary: We study the theoretical properties of importance weighting for deep learning.
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • Supplementary Material: zip
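The phenomenon described in the abstract can be illustrated with a small, hypothetical sketch (not the authors' code): gradient descent on an importance-weighted logistic loss over linearly separable data converges in *direction* toward the same max-margin separator regardless of the weights, so the influence of the weighting fades as training proceeds. The data, weights, and hyperparameters below are arbitrary choices for illustration.

```python
import numpy as np

# Toy demonstration: on separable data, long-run gradient descent on a
# weighted logistic loss yields (nearly) the same decision direction
# whether or not one class is heavily up-weighted.

rng = np.random.default_rng(0)
# Two well-separated 2D clusters, labels in {-1, +1}.
X = np.vstack([rng.normal([2, 2], 0.3, (20, 2)),
               rng.normal([-2, -2], 0.3, (20, 2))])
y = np.hstack([np.ones(20), -np.ones(20)])

def train(weights, steps=20000, lr=0.5):
    """Gradient descent on sum_i c_i * log(1 + exp(-y_i x_i . w))."""
    w = np.zeros(2)
    for _ in range(steps):
        margins = y * (X @ w)
        # Gradient of the importance-weighted logistic loss.
        grad = -(weights * y / (1.0 + np.exp(margins))) @ X
        w -= lr * grad / len(y)
    return w / np.linalg.norm(w)  # compare directions only

uniform = train(np.ones(40))
upweighted = train(np.where(y > 0, 10.0, 1.0))  # 10x weight on positives
# Cosine similarity of the two learned directions is close to 1.
print(float(uniform @ upweighted))
```

With many fewer steps the two directions differ more noticeably, which matches the observation that importance weighting matters early in training but washes out as the margins grow.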