Dr2Net: Dynamic Reversible Dual-Residual Networks for Memory-Efficient Finetuning

Published: 01 Jan 2024, Last Modified: 15 Apr 2025, CVPR 2024, CC BY-SA 4.0
Abstract: Large pretrained models are increasingly crucial in modern computer vision tasks. These models are typically used in downstream tasks by end-to-end finetuning, which is highly memory-intensive for tasks with high-resolution data, e.g., video understanding, small object detection, and point cloud analysis. In this paper, we propose Dynamic Reversible Dual-Residual Networks, or Dr2Net, a novel family of network architectures that acts as a surrogate network to finetune a pretrained model with substantially reduced memory consumption. Dr2Net contains two types of residual connections, one maintaining the residual structure in the pretrained models, and the other making the network reversible. Due to its reversibility, intermediate activations, which can be reconstructed from the outputs, are cleared from memory during training. We use two coefficients on the two types of residual connections respectively, and introduce a dynamic training strategy that seamlessly transitions the pretrained model to a reversible network with much higher numerical precision. We evaluate Dr2Net on various pretrained models and various tasks, and show that it can reach comparable performance to conventional finetuning but with significantly less memory usage. Code will be available at https://github.com/coolbay/Dr2Net.
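
The sketch below illustrates the general mechanism the abstract describes: a RevNet-style coupling with two coefficients, where `beta` scales the pretrained residual branch and `alpha` scales the connection that makes the block invertible, so inputs can be recomputed from outputs instead of being stored. The coupling, coefficient placement, and module names here are illustrative assumptions; the exact Dr2Net formulation and its dynamic coefficient schedule are given in the paper and repository.

```python
import torch
import torch.nn as nn


class ReversibleDualResidualBlock(nn.Module):
    """Illustrative reversible block with two residual coefficients.

    Forward (hypothetical coupling, not the exact Dr2Net equations):
        y1 = alpha * x2 + beta * F(x1)
        y2 = alpha * x1 + beta * G(y1)
    For alpha != 0 the mapping is exactly invertible, so (x1, x2) can be
    reconstructed from (y1, y2) during backpropagation rather than cached.
    """

    def __init__(self, f: nn.Module, g: nn.Module, alpha: float = 0.1, beta: float = 1.0):
        super().__init__()
        self.f, self.g = f, g
        self.alpha, self.beta = alpha, beta

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        y1 = self.alpha * x2 + self.beta * self.f(x1)
        y2 = self.alpha * x1 + self.beta * self.g(y1)
        return y1, y2

    @torch.no_grad()
    def inverse(self, y1: torch.Tensor, y2: torch.Tensor):
        # Recover the inputs from the outputs; no intermediate activations needed.
        x1 = (y2 - self.beta * self.g(y1)) / self.alpha
        x2 = (y1 - self.beta * self.f(x1)) / self.alpha
        return x1, x2


if __name__ == "__main__":
    dim = 64
    block = ReversibleDualResidualBlock(
        f=nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim)),
        g=nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim)),
        alpha=0.1,
    ).eval()

    x1, x2 = torch.randn(8, dim), torch.randn(8, dim)
    with torch.no_grad():
        y1, y2 = block(x1, x2)
    r1, r2 = block.inverse(y1, y2)
    # Reconstruction error stays small but grows as alpha shrinks, which is the
    # numerical-precision issue the dynamic training strategy is meant to address.
    print((r1 - x1).abs().max().item(), (r2 - x2).abs().max().item())
```

Note that dividing by a small `alpha` amplifies floating-point error in the reconstruction, which is one way to see why the abstract emphasizes a dynamic transition toward a reversible network with higher numerical precision.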