Keywords: block coordinate descent, large language models
TL;DR: This work designs a memory-efficient block coordinate descent optimization method for finetuning LLMs, which effectively finetunes Llama 3-8B and Llama 3-70B using a single RTX3090-24GB GPU and 4 A100-80GB GPUs, respectively.
Abstract: This work presents BAdam, an optimization method that leverages the block coordinate descent (BCD) framework with Adam's update rule. BAdam offers a memory-efficient approach to full-parameter finetuning of large language models. We conduct a theoretical convergence analysis for BAdam in the deterministic case. Experimentally, we apply BAdam to finetune the Llama 3-8B and Llama 3-70B models using a single RTX3090-24GB GPU and 4 A100-80GB GPUs, respectively. The results confirm BAdam's efficiency in terms of memory usage, running time, and optimization capability. Furthermore, the downstream performance evaluation on MT-bench and math benchmarks shows that BAdam outperforms existing memory-efficient baselines such as LoRA, and that BAdam can achieve performance comparable or even superior to Adam. Finally, an ablation study using SGD's update rule illustrates the suitability of BCD for finetuning LLMs. Our code can be easily integrated into any PyTorch-based codebase and is available at https://github.com/Ledzy/BAdam.
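The following is a minimal PyTorch sketch of the BCD-with-Adam idea described in the abstract, not the authors' implementation: the toy model, block partitioning, learning rate, and block-switching schedule (`switch_every`) are illustrative assumptions. The key point it shows is that only the active block requires gradients and Adam states at any time, which is where the memory saving comes from.

```python
# Illustrative sketch of block coordinate descent with Adam's update rule.
# NOT the official BAdam implementation; model, partitioning, and schedule
# are placeholder assumptions for demonstration only.
import torch
import torch.nn as nn

# Toy stand-in for an LLM: a stack of layers, each treated as one block.
model = nn.Sequential(*[nn.Linear(64, 64) for _ in range(4)])
blocks = [list(layer.parameters()) for layer in model]

data = [(torch.randn(8, 64), torch.randn(8, 64)) for _ in range(12)]
loss_fn = nn.MSELoss()
switch_every = 3  # assumed number of Adam steps per block before switching

step = 0
for active_idx in range(len(blocks)):
    # Freeze all parameters except the active block, so gradients and Adam
    # moment buffers are only allocated for that block.
    for p in model.parameters():
        p.requires_grad_(False)
    for p in blocks[active_idx]:
        p.requires_grad_(True)

    # A fresh Adam optimizer over the active block only.
    optimizer = torch.optim.Adam(blocks[active_idx], lr=1e-3)

    for _ in range(switch_every):
        x, y = data[step % len(data)]
        step += 1
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```

In this sketch, cycling through blocks one at a time keeps the optimizer state proportional to a single block rather than the whole model, mirroring the memory-efficiency argument made in the abstract.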
Primary Area: Optimization (convex and non-convex, discrete, stochastic, robust)
Submission Number: 5025