Regularizing Neural Networks by Stochastically Training Layer Ensembles

Published: 01 Jan 2019, Last Modified: 27 Sept 2024 · CoRR 2019 · CC BY-SA 4.0
Abstract: Dropout and similar stochastic neural network regularization methods are often interpreted as implicitly averaging over a large ensemble of models. We propose STE (stochastically trained ensemble) layers, which strengthen this averaging effect by training an ensemble of weight matrices under stochastic regularization while explicitly averaging their outputs. This yields stronger regularization at no additional computational cost at test time. We show consistent improvements on a range of image classification tasks using standard network architectures.
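
The paper body is not included here, but the mechanism the abstract describes lends itself to a short illustration. Below is a minimal PyTorch sketch, not the authors' implementation: it assumes the ensemble members are linear maps, that dropout is the stochastic regularizer applied independently to each member's input, and that the zero test-time overhead comes from collapsing the averaged linear members into their mean weight matrix. The names `STELinear`, `ensemble_size`, and `drop_p` are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class STELinear(nn.Module):
    """Sketch of a stochastically trained ensemble (STE) linear layer.

    Training: an ensemble of weight matrices is trained with dropout on
    each member's input, and the members' outputs are explicitly averaged.
    Test: averaging the outputs of linear maps equals applying the averaged
    map, so inference reduces to a single matrix multiply.
    """

    def __init__(self, in_features: int, out_features: int,
                 ensemble_size: int = 4, drop_p: float = 0.5):
        super().__init__()
        # One weight matrix per ensemble member.
        self.weights = nn.Parameter(
            torch.randn(ensemble_size, out_features, in_features)
            * in_features ** -0.5
        )
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.drop_p = drop_p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Each member sees an independently dropped-out copy of the
            # input; the ensemble's outputs are averaged explicitly.
            outs = [
                F.linear(F.dropout(x, self.drop_p, training=True), w)
                for w in self.weights
            ]
            return torch.stack(outs).mean(dim=0) + self.bias
        # Test time: collapse the ensemble into its mean weight matrix.
        return F.linear(x, self.weights.mean(dim=0), self.bias)


# Usage: behaves like nn.Linear, with ensemble averaging during training.
layer = STELinear(128, 64, ensemble_size=4)
layer.train()
y_train = layer(torch.randn(32, 128))  # averaged over 4 noisy members
layer.eval()
y_test = layer(torch.randn(32, 128))   # single matmul, no extra cost
```

Under these assumptions, the free test-time inference follows from linearity: since inverted dropout preserves the input in expectation, the average of the ensemble members' outputs matches a single application of the mean weight matrix, so only one effective layer is evaluated at inference.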