The Flip Side of the Reweighted Coin: Duality of Adaptive Dropout and Regularization

21 May 2021, 20:51 (edited 24 Jan 2022) · NeurIPS 2021 Poster
  • Keywords: sparsity, deep learning, dropout, regularization, adaptive
  • TL;DR: We prove a duality between adaptive dropout sparsity methods and subquadratic regularization penalties.
  • Abstract: Among the most successful methods for sparsifying deep (neural) networks are those that adaptively mask the network weights throughout training. By examining this masking, or dropout, in the linear case, we uncover a duality between such adaptive methods and regularization through the so-called “η-trick” that casts both as iteratively reweighted optimizations. We show that any dropout strategy that adapts to the weights in a monotonic way corresponds to an effective subquadratic regularization penalty, and therefore leads to sparse solutions. We obtain the effective penalties for several popular sparsification strategies, which are remarkably similar to classical penalties commonly used in sparse optimization. Considering variational dropout as a case study, we demonstrate similar empirical behavior between the adaptive dropout method and classical methods on the task of deep network sparsification, validating our theory.
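The "η-trick" mentioned in the abstract rewrites a sparsity penalty as a quadratic penalty with adaptive per-weight scales, alternating between a reweighting step and a weighted ridge solve. Below is a minimal, hedged sketch of this iteratively reweighted scheme for the classical lasso case (not the paper's method; the problem sizes and names are illustrative): minimizing ½‖y − Xw‖² + λ·Σᵢ|wᵢ| via its variational form ½‖y − Xw‖² + (λ/2)·Σᵢ(wᵢ²/ηᵢ + ηᵢ), where the optimal scale is ηᵢ = |wᵢ|.

```python
import numpy as np

# Synthetic sparse regression problem (illustrative sizes)
rng = np.random.default_rng(0)
n, d = 50, 20
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]          # only 3 active coordinates
y = X @ w_true + 0.01 * rng.standard_normal(n)

lam = 0.5                               # sparsity strength
eps = 1e-8                              # floor to keep 1/eta finite
w = np.ones(d)

for _ in range(200):
    # Reweighting step: the optimal variational scale is eta_i = |w_i|
    eta = np.abs(w) + eps
    # Weighted ridge step: solve (X^T X + lam * diag(1/eta)) w = X^T y,
    # the stationarity condition of 0.5*||y - Xw||^2 + (lam/2) * sum(w_i^2 / eta_i)
    A = X.T @ X + lam * np.diag(1.0 / eta)
    w = np.linalg.solve(A, X.T @ y)

print(np.round(w, 3))                   # inactive coordinates shrink toward zero
```

Each ridge solve is quadratic in `w`, yet the alternation drives small weights toward zero, recovering the sparsity of the subquadratic penalty; this is the iteratively reweighted view that the paper identifies with adaptive dropout masking.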
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.