Lowering PyTorch's Memory Consumption for Selective Differentiation

Published: 18 Jun 2024, Last Modified: 09 Jul 2024
WANT@ICML 2024 Poster
License: CC BY 4.0
Keywords: selective automatic differentiation, fine-tuning, backpropagation, memory efficiency
TL;DR: We describe an overlooked opportunity to save memory in PyTorch's autodiff whenever gradients are requested for only a subset of tensors.
Abstract: Memory is a limiting resource for many deep learning tasks. Besides the neural network weights, one main memory consumer is the computation graph built up by automatic differentiation (AD) for backpropagation. We observe that PyTorch's current AD implementation neglects information about parameter differentiability when storing the computation graph. This information, however, can be used to reduce memory whenever gradients are requested for only a parameter subset, as is the case in many modern fine-tuning tasks. Specifically, inputs to layers that act linearly in their parameters and inputs (dense, convolution, or normalization layers in evaluation mode) can be discarded whenever those parameters are marked as non-differentiable. We provide a drop-in, differentiability-agnostic implementation of such layers and demonstrate its ability to reduce memory without affecting run time on popular convolution- and attention-based architectures.
Submission Number: 34
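
The idea can be illustrated with a minimal PyTorch sketch (an illustration under stated assumptions, not the paper's actual implementation; the class name MemoryEfficientLinear is hypothetical): a custom autograd function for a dense layer that saves, for the backward pass, only the tensors that the requested gradients actually need. Since the gradient with respect to the weight is the only one that requires the layer input, a frozen (non-differentiable) weight lets the input be discarded.

```python
import torch
from torch import Tensor


class MemoryEfficientLinear(torch.autograd.Function):
    """Dense layer that stores only the tensors its backward pass will need."""

    @staticmethod
    def forward(ctx, x: Tensor, weight: Tensor, bias: Tensor) -> Tensor:
        # grad w.r.t. the input needs the weight; grad w.r.t. the weight
        # needs the input. Save each only if it will be used, so a frozen
        # (requires_grad=False) weight lets us discard the input.
        ctx.save_for_backward(
            weight if x.requires_grad else None,
            x if weight.requires_grad else None,
        )
        return x @ weight.T + bias

    @staticmethod
    def backward(ctx, grad_out: Tensor):
        weight, x = ctx.saved_tensors
        grad_x = grad_out @ weight if ctx.needs_input_grad[0] else None
        grad_w = grad_out.T @ x if ctx.needs_input_grad[1] else None
        grad_b = grad_out.sum(0) if ctx.needs_input_grad[2] else None
        return grad_x, grad_w, grad_b


# Fine-tuning-style usage: the weight is frozen, so the (potentially large)
# layer input is not kept alive by the computation graph.
x = torch.randn(128, 512, requires_grad=True)
weight = torch.randn(256, 512)  # requires_grad=False: non-differentiable
bias = torch.randn(256, requires_grad=True)
out = MemoryEfficientLinear.apply(x, weight, bias)
out.sum().backward()
```

As the abstract notes, PyTorch's stock layers store the input regardless of the weight's differentiability; the sketch above makes that storage decision depend on the requires_grad flags instead, which is what allows a drop-in, differentiability-agnostic replacement.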