GroSS Decomposition: Group-Size Series Decomposition for Whole Search-Space Training

Anonymous

Sep 25, 2019 · ICLR 2020 Conference Blind Submission
  • Keywords: architecture search, block term decomposition, network decomposition, network acceleration, group convolution
  • TL;DR: A decomposition method which allows for simultaneous training of an entire search space of group convolution architectures.
  • Abstract: We present Group-size Series (GroSS) decomposition, a mathematical formulation of tensor factorisation into a series of approximations of increasing rank terms. GroSS allows for dynamic and differentiable selection of factorisation rank, which is analogous to a grouped convolution. Therefore, to the best of our knowledge, GroSS is the first method to simultaneously train differing numbers of groups within a single layer, as well as all possible combinations between layers. In doing so, GroSS trains an entire grouped convolution architecture search-space concurrently. We demonstrate this with a proof-of-concept exhaustive architecture search with a performance objective. GroSS represents a significant step towards liberating network architecture search from the burden of training and finetuning.
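The abstract's link between factorisation rank and group convolution can be illustrated with a minimal NumPy sketch. This is not the paper's block term decomposition: the function names and the two-term split below are illustrative assumptions, showing only that a grouped 1x1 convolution equals a full convolution with block-diagonal weights, and that a full weight can be written as a series (block-diagonal term plus residual) whose truncation point selects the grouped architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def grouped_conv1x1(x, w, groups):
    # Grouped 1x1 convolution: x is (C_in, N) features,
    # w is (C_out, C_in // groups) per-group weights.
    C_in = x.shape[0]
    C_out = w.shape[0]
    in_pg, out_pg = C_in // groups, C_out // groups
    out = np.empty((C_out, x.shape[1]))
    for g in range(groups):
        xg = x[g * in_pg:(g + 1) * in_pg]
        wg = w[g * out_pg:(g + 1) * out_pg]
        out[g * out_pg:(g + 1) * out_pg] = wg @ xg
    return out

def block_diag_embed(w, groups):
    # Embed grouped weights as a block-diagonal full (C_out, C_in) matrix.
    C_out, in_pg = w.shape
    out_pg = C_out // groups
    full = np.zeros((C_out, in_pg * groups))
    for g in range(groups):
        full[g * out_pg:(g + 1) * out_pg,
             g * in_pg:(g + 1) * in_pg] = w[g * out_pg:(g + 1) * out_pg]
    return full

C_in, C_out, N, groups = 8, 8, 5, 4
x = rng.standard_normal((C_in, N))
w_grouped = rng.standard_normal((C_out, C_in // groups))

# A grouped convolution is a full convolution with block-diagonal weights.
y_grouped = grouped_conv1x1(x, w_grouped, groups)
y_full = block_diag_embed(w_grouped, groups) @ x
assert np.allclose(y_grouped, y_full)

# Series idea (illustrative): split a full weight W into a block-diagonal
# term plus a residual correction. Truncating after the first term yields
# the grouped architecture; summing both recovers the full convolution.
W = rng.standard_normal((C_out, C_in))
mask = block_diag_embed(np.ones((C_out, C_in // groups)), groups)
term0 = W * mask           # grouped (block-diagonal) approximation
residual = W * (1 - mask)  # higher-rank correction term
assert np.allclose((term0 + residual) @ x, W @ x)
```

Under this view, choosing how many series terms to keep per layer is a discrete, per-layer choice of group size, which is the search space GroSS trains simultaneously.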