MixSize: Training Convnets With Mixed Image Sizes for Improved Accuracy, Speed and Scale Resiliency

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission
Abstract: Convolutional neural networks (CNNs) are commonly trained using a fixed spatial image size predetermined for a given model. Although trained on images of a specific size, it is well established that CNNs can evaluate a wide range of image sizes at test time, by adjusting the size of intermediate feature maps.

In this work, we describe and evaluate a novel mixed-size training regime that mixes several image sizes at training time. We demonstrate that models trained using our method are more resilient to image size changes and generalize well even on small images. This allows faster inference by using smaller images at test time. For instance, we achieve a $76.43\%$ top-1 accuracy using ResNet50 with an image size of $160$, which matches the accuracy of the baseline model with $2 \times$ fewer computations. Furthermore, for a given image size used at test time, we show this method can be exploited either to accelerate training or to improve the final test accuracy. For example, we are able to reach a $79.27\%$ accuracy with a model evaluated at a $288$ spatial size, for a relative improvement of $14\%$ over the baseline. Our PyTorch implementation and pre-trained models are publicly available\footnote{\url{https://github.com/paper-submissions/mix-match}}.
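The core idea of the abstract — resampling each training batch to a randomly drawn spatial size, which a CNN with global pooling can absorb without architectural changes — can be sketched as follows. This is a minimal illustration, not the authors' released implementation: the size set `SIZES`, the uniform sampling, and the function names are assumptions for the sketch.

```python
import random
import torch
import torch.nn.functional as F

# Candidate spatial sizes for mixed-size training (assumed set, centered on
# the common 224 resolution; the paper's actual schedule may differ).
SIZES = [128, 160, 192, 224, 288]

def resize_batch(images: torch.Tensor, size: int) -> torch.Tensor:
    """Rescale a batch of images (N, C, H, W) to (N, C, size, size)."""
    return F.interpolate(images, size=(size, size),
                         mode="bilinear", align_corners=False)

def mixed_size_step(model, images, labels, loss_fn):
    """One training step where the batch's image size is drawn at random,
    so the model sees a mixture of sizes over the course of training."""
    size = random.choice(SIZES)          # new size for this batch (assumed uniform)
    images = resize_batch(images, size)  # intermediate feature maps shrink/grow accordingly
    return loss_fn(model(images), labels)
```

Because a ResNet-style model ends in global average pooling, the same weights process any of these sizes; at test time one can then pick a smaller size (e.g. $160$) to trade compute for little or no accuracy loss.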
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=Ga7K621-Vj