Keywords: Cyclic learning rate, Efficient benchmarking, Tradeoff curve, Pareto frontier
TL;DR: Cyclic learning rate schedules can be used to benchmark the tradeoff between model accuracy and training time in a single training run
Abstract: Benchmarking the tradeoff between neural network accuracy and training time is computationally expensive. Here we show how a multiplicative cyclic learning rate schedule can be used to construct a tradeoff curve in a single training run. We generate cyclic tradeoff curves for combinations of training methods such as Blurpool, Channels Last, Label Smoothing, and MixUp, and highlight how these cyclic tradeoff curves can be used to efficiently evaluate the effects of algorithmic choices on network training.
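To make the core idea concrete, here is a minimal sketch of one possible multiplicative cyclic learning rate schedule. This is an illustrative assumption, not the paper's exact schedule: the rate decays by a constant multiplicative factor within each cycle and resets to its peak at the cycle boundary, so each trough yields a checkpoint whose (accuracy, training time) pair contributes a point on the tradeoff curve. The names `peak_lr`, `decay`, and `steps_per_cycle` are hypothetical.

```python
def cyclic_multiplicative_lr(step, peak_lr=1.0, decay=0.9, steps_per_cycle=20):
    """Learning rate at a given step for an illustrative multiplicative
    cyclic schedule (an assumption, not the paper's exact formula):
    within each cycle the rate is peak_lr * decay**k, where k is the
    step's position inside its cycle."""
    k = step % steps_per_cycle
    return peak_lr * decay ** k

# The rate peaks at the start of each cycle and decays toward a trough;
# evaluating the model at each trough traces out accuracy-vs-time points
# in a single training run.
schedule = [cyclic_multiplicative_lr(s) for s in range(60)]
```

In practice such a schedule would be plugged into a framework's scheduler hook (e.g. a per-step LR update), with an evaluation pass at the end of each cycle to record one point of the tradeoff curve.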