TL;DR: We identify and address gaps in the early-exit literature by analyzing the weaknesses of commonly used training strategies for multi-exit models.
Abstract: Early-exit methods attach trainable internal classifiers to a backbone network so that the forward pass can terminate early. Existing methods typically adopt either a joint training approach, where the backbone and exit heads are trained simultaneously, or a disjoint approach, where the heads are trained separately. The implications of this choice are often overlooked: studies usually adopt one approach without justification, even though it shapes the training dynamics, whose impact remains largely unexplored. In this paper, we introduce a set of metrics to analyze early-exit training dynamics and to guide the choice of training strategy. We demonstrate that the conventionally used joint and disjoint regimes yield suboptimal performance. To address these limitations, we propose a mixed training strategy: the backbone is trained first, followed by training of the entire multi-exit network. Through a comprehensive evaluation across various architectures, datasets, and early-exit methods, we characterize the strengths and weaknesses of each training strategy. In particular, we show consistent improvements in performance and efficiency with the proposed mixed strategy.
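The mixed strategy can be summarized in two stages; below is a minimal PyTorch-style sketch, assuming a hypothetical multi-exit model whose forward pass returns one set of logits per exit head (with the backbone's own classifier last). The attribute names (`model.backbone`, `model.exit_heads`), the optimizer settings, and the dataloader are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch of the proposed mixed training strategy:
# Stage 1 trains the backbone alone; Stage 2 trains the entire multi-exit network.
# Model interface and names below are assumptions for illustration only.
import torch
import torch.nn as nn


def run_epoch(model, loader, optimizer, use_exits, device="cpu"):
    """One epoch of cross-entropy training.

    If use_exits is False, only the final (backbone) classifier contributes
    to the loss; otherwise the losses of all exit heads are summed.
    """
    criterion = nn.CrossEntropyLoss()
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        logits_per_exit = model(x)  # assumed: list of logits, backbone head last
        heads = logits_per_exit if use_exits else logits_per_exit[-1:]
        loss = sum(criterion(logits, y) for logits in heads)
        loss.backward()
        optimizer.step()


def mixed_training(model, loader, backbone_epochs, multi_exit_epochs):
    # Stage 1: train the backbone on its own, as a standard single-exit network.
    opt = torch.optim.SGD(model.backbone.parameters(), lr=0.1, momentum=0.9)
    for _ in range(backbone_epochs):
        run_epoch(model, loader, opt, use_exits=False)

    # Stage 2: train the entire multi-exit network (backbone + all exit heads).
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    for _ in range(multi_exit_epochs):
        run_epoch(model, loader, opt, use_exits=True)
```

In this sketch, stage 1 matches disjoint-style backbone pretraining (only the final classifier produces a loss), while stage 2 matches joint optimization over all heads; the split of epochs and learning rates between the two stages is a placeholder, not a tuned setting from the paper.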
Lay Summary: Modern deep learning models process every input through all of their layers, even when an example is easy to classify, wasting computation and energy. A popular fix is to insert small “checkpoints” (early exits) at intermediate layers so the model can stop inference early. Past work either trains the whole network and the checkpoints together or trains them separately, with little guidance on which works best. We introduce simple metrics to track how different training strategies affect learning, and find that the common “joint” and “disjoint” approaches each leave accuracy or efficiency on the table. We then propose a mixed strategy: first train the core network on its own, and only afterward train the entire network together with its early-exit classifiers. Across multiple architectures, datasets, and checkpoint designs, this two-stage method consistently improves both prediction accuracy and inference speed. Our results give practitioners clear guidance on choosing and training multi-exit models, paving the way for faster, greener AI systems.
Link To Code: https://github.com/kamadforge/early-exit-benchmark
Primary Area: Deep Learning->Everything Else
Keywords: early exiting, multi-exit models, conditional computation, dynamic neural networks, efficient inference
Submission Number: 4627