Learning Identity Mappings with Residual Gates

28 Mar 2024 (modified: 21 Jul 2022) | Submitted to ICLR 2017 | Readers: Everyone
Abstract: We propose a layer augmentation technique that adds shortcut connections with a linear gating mechanism and can be applied to almost any network model. Because each gate is controlled by a single scalar parameter, a layer can learn the identity mapping by optimizing only that one parameter. We build upon the motivation behind Highway Neural Networks and Residual Networks, where a layer is reformulated in order to make learning identity mappings less problematic for the optimizer. The augmentation introduces only one extra parameter per layer and eases optimization by making degeneration into an identity mapping simpler. Experimental results show that augmenting layers provides better optimization, increased performance, and more layer independence. We evaluate our method on MNIST using fully-connected networks, showing empirical indications that our augmentation facilitates the optimization of deep models, and that it provides high tolerance to full layer removal: the model retains over 90% of its performance even after half of its layers have been randomly removed. In our experiments, augmented plain networks -- which can be interpreted as simplified Highway Neural Networks -- outperform ResNets, raising new questions on how shortcut connections should be designed. We also evaluate our model on CIFAR-10 and CIFAR-100 using augmented Wide ResNets, achieving 3.65% and 18.27% test error, respectively.
TL;DR: This paper proposes adding simple gates to layers to make learning identity mappings trivial. It also introduces Gated Plain Networks and Gated Residual Networks.
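To make the gating concrete, below is a minimal PyTorch sketch of how such a scalar-gated shortcut might be wrapped around an existing layer. The module name GatedLayer, the initialization k_init=1.0, and the use of a raw (untransformed) scalar gate are illustrative assumptions, not the paper's exact formulation; the key point is that a single parameter k per layer suffices to recover the identity mapping (k = 0).

import torch
import torch.nn as nn

class GatedLayer(nn.Module):
    """Scalar-gated shortcut around an arbitrary layer f:
    y = k * f(x) + (1 - k) * x, so k = 0 makes the layer an exact identity.
    (Sketch only; gate parameterization is an assumption.)"""
    def __init__(self, layer, k_init=1.0):
        super().__init__()
        self.layer = layer
        # The single extra parameter introduced per augmented layer.
        self.k = nn.Parameter(torch.tensor(k_init))

    def forward(self, x):
        return self.k * self.layer(x) + (1.0 - self.k) * x

# Usage sketch: augment one fully-connected block; input and output widths
# must match for the shortcut to be well defined.
block = GatedLayer(nn.Sequential(nn.Linear(128, 128), nn.ReLU()))
y = block(torch.randn(32, 128))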
Conflicts: land.ufrj.br, cos.ufrj.br
Keywords: Computer vision, Deep learning, Optimization