## Dynamic Tensor Rematerialization

28 Sept 2020, 15:49 (modified: 21 Feb 2022, 13:38) · ICLR 2021 Spotlight
Keywords: Rematerialization, Memory-saving, Runtime Systems, Checkpointing
Abstract: Checkpointing enables the training of deep learning models under restricted memory budgets by freeing intermediate activations from memory and recomputing them on demand. Current checkpointing techniques statically plan these recomputations offline and assume static computation graphs. We demonstrate that a simple online algorithm can achieve comparable performance by introducing Dynamic Tensor Rematerialization (DTR), a greedy online algorithm for checkpointing that is extensible and general, is parameterized by eviction policy, and supports dynamic models. We prove that DTR can train an $N$-layer linear feedforward network on an $\Omega(\sqrt{N})$ memory budget with only $\mathcal{O}(N)$ tensor operations. DTR closely matches the performance of optimal static checkpointing in simulated experiments. We incorporate a DTR prototype into PyTorch merely by interposing on tensor allocations and operator calls and collecting lightweight metadata on tensors.
One-sentence Summary: We present an online algorithm for rematerialization (recomputing intermediate activations during backpropagation instead of storing them), which enables training under low memory, finding that it is competitive with offline techniques.
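To make the idea concrete, here is a minimal, hypothetical sketch of a DTR-style cache: tensors are recomputed from their parents on demand, and when memory runs short a greedy heuristic evicts the resident tensor that is cheapest to recompute, largest, and stalest. The class and attribute names, the exact heuristic score, and the cost model are illustrative assumptions, not the paper's implementation.

```python
class Tensor:
    """A node in the dataflow graph; `value is None` means evicted.

    Inputs have `op=None` and are treated as pinned outside the budget.
    All names and the cost model here are illustrative, not DTR's actual API.
    """
    def __init__(self, op, parents, size, compute_cost):
        self.op = op                  # callable that recomputes this tensor
        self.parents = parents
        self.size = size
        self.compute_cost = compute_cost
        self.value = None
        self.last_access = 0

class DTRCache:
    """Greedy online rematerialization under a fixed memory budget (sketch)."""
    def __init__(self, budget):
        self.budget = budget
        self.resident = []            # non-input tensors currently in memory
        self.clock = 0                # logical time, for staleness
        self.recomputes = 0

    def used(self):
        return sum(t.size for t in self.resident)

    def get(self, t, pinned=()):
        self.clock += 1
        t.last_access = self.clock
        if t.value is not None:
            return t.value
        # Rematerialize: recursively materialize parents, then free space.
        args = [self.get(p, pinned=(t, *pinned)) for p in t.parents]
        while self.used() + t.size > self.budget:
            self.evict_one(exclude={t, *t.parents, *pinned})
        self.recomputes += 1
        t.value = t.op(*args)
        self.resident.append(t)
        return t.value

    def evict_one(self, exclude):
        # Greedy heuristic (one possible eviction policy): prefer tensors
        # that are cheap to recompute, large, and not recently accessed.
        candidates = [t for t in self.resident
                      if t.op is not None and t not in exclude]
        victim = min(candidates,
                     key=lambda t: t.compute_cost /
                                   (t.size * (self.clock - t.last_access + 1)))
        victim.value = None
        self.resident.remove(victim)
```

Because eviction is parameterized by the scoring function in `evict_one`, swapping in a different policy changes behavior without touching the execution loop, which is the extensibility the abstract describes.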
Code: [uwsampl/dtr-prototype](https://github.com/uwsampl/dtr-prototype)