CondConv: Conditionally Parameterized Convolutions for Efficient Inference

Brandon Yang, Gabriel Bender, Quoc Le, Jiquan Ngiam

06 Sept 2019 (modified: 05 May 2023) · NeurIPS 2019
Abstract: Convolutional layers are one of the basic building blocks of modern deep neural networks. A fundamental assumption they make is that the same convolutional kernels are shared across all examples in a dataset. We propose conditionally parameterized convolutions (CondConv), which learn specialized convolutional kernels for each example. Replacing normal convolutions with CondConv lets us increase the size and capacity of a network while maintaining efficient inference. We evaluate CondConv on MobileNetV1, MobileNetV2, and ResNet-50, and show improvements on all three architectures. On ImageNet classification, CondConv improves the top-1 validation accuracy of the MobileNetV1 architecture from 71.9% to 75.8% at only a 22% increase in inference cost. On COCO object detection, CondConv improves the minival mAP of the MobileNetV1 SSD300 model from 20.3 to 22.4 with just a 4% increase in inference cost.
Code Link: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/condconv
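The mechanism behind CondConv is to compute per-example routing weights from the input with a lightweight function (global average pooling followed by a learned projection and a sigmoid), combine a small set of expert kernels with those weights, and apply the resulting example-specific kernel as an ordinary convolution. The official TensorFlow implementation is at the link above; below is a minimal sketch in PyTorch, where the class name `CondConv2d`, the default `num_experts=8`, and the grouped-convolution batching trick are illustrative choices under stated assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondConv2d(nn.Module):
    """Sketch of a conditionally parameterized convolution.

    Each example gets its own kernel, computed as a routing-weighted
    combination of `num_experts` expert kernels.
    """
    def __init__(self, in_ch, out_ch, kernel_size,
                 num_experts=8, stride=1, padding=0):
        super().__init__()
        self.num_experts = num_experts
        self.out_ch = out_ch
        self.stride = stride
        self.padding = padding
        # One kernel per expert, stored as a single parameter tensor.
        self.weight = nn.Parameter(
            torch.randn(num_experts, out_ch, in_ch,
                        kernel_size, kernel_size) * 0.01
        )
        # Routing function: global average pool -> linear -> sigmoid.
        self.routing = nn.Linear(in_ch, num_experts)

    def forward(self, x):
        b, c, h, w = x.shape
        # Per-example routing weights, shape (b, num_experts).
        pooled = x.mean(dim=(2, 3))
        alphas = torch.sigmoid(self.routing(pooled))
        # Combine expert kernels into one kernel per example:
        # (b, e) x (e, o, i, kh, kw) -> (b, o, i, kh, kw).
        kernels = torch.einsum('be,eoihw->boihw', alphas, self.weight)
        # Apply per-example kernels in one call by folding the batch
        # into the channel dimension and using groups=b.
        x = x.reshape(1, b * c, h, w)
        kernels = kernels.reshape(b * self.out_ch, c, *kernels.shape[-2:])
        out = F.conv2d(x, kernels, stride=self.stride,
                       padding=self.padding, groups=b)
        return out.reshape(b, self.out_ch, *out.shape[-2:])
```

A drop-in usage example: `CondConv2d(in_ch=32, out_ch=64, kernel_size=3, padding=1)` applied to a `(4, 32, 28, 28)` input yields a `(4, 64, 28, 28)` output, so the layer can replace a standard `nn.Conv2d` while adding only the routing cost (one pooled matrix-vector product and a kernel combination) per example at inference.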