Building effective deep neural networks one feature at a time
Martin Mundt, Tobias Weis, Kishore Konda, Visvanathan Ramesh
Feb 15, 2018 (modified: Feb 15, 2018) · ICLR 2018 Conference Blind Submission
Abstract: Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of large numbers of features. These networks typically rely on a variety of regularization and pruning techniques to converge to less redundant states. We introduce a novel bottom-up approach to expanding representations in fixed-depth architectures. These architectures start from just a single feature per layer and greedily increase the width of individual layers to attain the effective representational capacity needed for a specific task. While network growth can rely on a family of metrics, we propose a computationally efficient version based on feature time evolution and demonstrate its potency in determining feature importance and a network's effective capacity. We demonstrate how automatically expanded architectures converge to similar topologies that benefit from fewer parameters or improved accuracy, and that exhibit systematic correspondence in representational complexity with the specified task. In contrast to conventional design patterns, where the number of features typically increases monotonically with depth, we observe that CNNs perform better when most learnable parameters reside in intermediate layers, with falloffs toward earlier and later layers.
TL;DR: A bottom-up algorithm that expands CNNs, starting with one feature per layer, to architectures with sufficient representational capacity.