Slimmable Neural Networks

Published: 21 Dec 2018, Last Modified: 29 Sep 2024. ICLR 2019 Conference Blind Submission.
Abstract: We present a simple and general method to train a single neural network executable at different widths (numbers of channels in a layer), permitting instant and adaptive accuracy-efficiency trade-offs at runtime. Instead of training individual networks with different width configurations, we train a shared network with switchable batch normalization. At runtime, the network can adjust its width on the fly according to on-device benchmarks and resource constraints, rather than downloading and offloading different models. Our trained networks, named slimmable neural networks, achieve ImageNet classification accuracy similar to (and in many cases better than) that of individually trained MobileNet v1, MobileNet v2, ShuffleNet, and ResNet-50 models at the corresponding widths. We also demonstrate better performance of slimmable models compared with individually trained ones across a wide range of applications, including COCO bounding-box object detection, instance segmentation, and person keypoint detection, without tuning hyper-parameters. Lastly, we visualize and discuss the learned features of slimmable networks. Code and models are available at: https://github.com/JiahuiYu/slimmable_networks
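
The core mechanism described in the abstract (convolution weights shared across widths, with a private set of batch normalization parameters per width) is compact enough to sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the class names mirror those in the linked repository, but the layer details, the `WIDTH_MULTS` values, and the attribute names used for switching are simplified assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed width multipliers; the paper trains several switches between 0.25x and 1.0x.
WIDTH_MULTS = [0.25, 0.5, 0.75, 1.0]


class SlimmableConv2d(nn.Conv2d):
    """Convolution that slices its weight tensor to the active width,
    so one set of shared weights serves every width configuration."""

    def __init__(self, max_in_channels, max_out_channels, kernel_size, **kwargs):
        super().__init__(max_in_channels, max_out_channels, kernel_size,
                         bias=False, **kwargs)
        self.active_out_channels = max_out_channels  # set externally to switch width

    def forward(self, x):
        in_channels = x.size(1)  # follow whatever width the input carries
        weight = self.weight[: self.active_out_channels, :in_channels]
        return F.conv2d(x, weight, None, self.stride, self.padding)


class SwitchableBatchNorm2d(nn.Module):
    """Switchable batch normalization: an independent BatchNorm2d per width,
    since feature statistics differ across width configurations."""

    def __init__(self, max_channels, width_mults=tuple(WIDTH_MULTS)):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(int(max_channels * w)) for w in width_mults
        )
        self.active_idx = len(width_mults) - 1  # default to full width

    def forward(self, x):
        return self.bns[self.active_idx](x)


# Switching width at runtime: pick a multiplier, slice the conv, select its BN.
conv = SlimmableConv2d(3, 64, 3, padding=1)
bn = SwitchableBatchNorm2d(64)
x = torch.randn(2, 3, 32, 32)
for idx, mult in enumerate(WIDTH_MULTS):
    conv.active_out_channels = int(64 * mult)
    bn.active_idx = idx
    y = bn(conv(x))  # each width reuses the conv weights but owns its BN stats
```

Note the design choice this sketch reflects: only the batch normalization layers are duplicated per width, while convolution weights are shared, so the overhead of supporting extra widths in one model is small.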
Keywords: Slimmable neural networks, mobile deep learning, accuracy-efficiency trade-offs
TL;DR: We present a simple and general method to train a single neural network executable at different widths (number of channels in a layer), permitting instant and adaptive accuracy-efficiency trade-offs at runtime.
Code: [JiahuiYu/slimmable_networks](https://github.com/JiahuiYu/slimmable_networks) + [2 community implementations](https://paperswithcode.com/paper/?openreview=H1gMCsAqY7)
Data: [ImageNet](https://paperswithcode.com/dataset/imagenet)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/slimmable-neural-networks/code)