Abstract: Deep residual networks (ResNets) have recently achieved a breakthrough in deep learning. The core idea of ResNets is to use shortcut connections between layers, which allow the network to be much deeper while remaining easy to optimize by mitigating vanishing gradients. These shortcut connections have interesting properties that make ResNets behave differently from other typical network architectures. In this work we exploit these properties to design a network based on a ResNet, but with parameter sharing and adaptive computation time. The resulting network is much smaller than the original and adapts its computational cost to the complexity of the input image.
TL;DR: A ResNet-inspired network with a small memory footprint that adapts its computational cost to the complexity of the input.
Keywords: efficient neural network, resource constrained, mobile, weight sharing, adaptive computation time
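The abstract does not include code, so the following is only a minimal illustrative sketch, in PyTorch, of the two ideas it combines: a single residual block whose weights are shared across iterations (parameter sharing), and an adaptive computation time (ACT) halting mechanism in the style of Graves (2016) that lets easy inputs stop iterating early. The module name, layer sizes, and halting-unit design are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class SharedResidualACT(nn.Module):
    """Hypothetical sketch: one shared residual block unrolled with ACT halting."""

    def __init__(self, channels: int, max_steps: int = 10, epsilon: float = 0.01):
        super().__init__()
        # A single residual block reused at every step, so the parameter count
        # does not grow with depth (parameter sharing).
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Per-step halting score computed from globally pooled features.
        self.halt = nn.Linear(channels, 1)
        self.max_steps = max_steps
        self.epsilon = epsilon

    def forward(self, x):
        batch = x.size(0)
        cum_halt = x.new_zeros(batch)          # accumulated halting probability per sample
        remainders = x.new_zeros(batch)        # leftover probability mass at the halting step
        still_running = x.new_ones(batch)      # 1.0 while a sample keeps iterating
        weighted_state = torch.zeros_like(x)   # ACT output: halting-weighted mean of states

        state = x
        for _ in range(self.max_steps):
            state = torch.relu(state + self.block(state))   # shared residual update
            p = torch.sigmoid(self.halt(state.mean(dim=(2, 3)))).squeeze(1)

            # Samples whose cumulative probability would exceed 1 - epsilon halt now
            # and contribute their remainder instead of p.
            new_cum = cum_halt + p * still_running
            halting_now = (new_cum > 1.0 - self.epsilon).float() * still_running
            keep_going = still_running - halting_now

            remainders = remainders + halting_now * (1.0 - cum_halt)
            weight = p * keep_going + remainders * halting_now
            weighted_state = weighted_state + weight.view(-1, 1, 1, 1) * state

            cum_halt = cum_halt + p * keep_going
            still_running = keep_going
            if still_running.sum() == 0:
                break

        # Samples that never crossed the threshold contribute their leftover mass.
        leftover = still_running * (1.0 - cum_halt)
        weighted_state = weighted_state + leftover.view(-1, 1, 1, 1) * state
        return weighted_state
```

In this sketch, simple inputs cross the halting threshold after a few shared-block iterations, while harder inputs run for more steps, which is one plausible way to realize the input-dependent computational cost described in the abstract.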