OmniBal: Towards Fast Instruction-Tuning for Vision-Language Models via Omniverse Computation Balance

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: In this work, we effectively addressed the issue of imbalanced computation loads in large-scale 3D parallel training of vision-language models by rebalancing across data, model, and memory dimensions.
Abstract: Vision-language instruction-tuning models have recently achieved significant performance improvements. In this work, we discover that large-scale 3D parallel training of these models leads to an imbalanced computation load across devices. The vision and language parts are inherently heterogeneous: their data distributions and model architectures differ significantly, which hurts distributed training efficiency. To address this issue, we rebalance the computational load from the data, model, and memory perspectives, achieving more balanced computation across devices. Specifically, for the data, instances are grouped into new balanced mini-batches within and across devices. For the model, a search-based method is employed to achieve a more balanced partitioning. For memory, we adaptively adjust the re-computation strategy of each partition to fully utilize the available memory. These three perspectives are not independent but closely connected, forming an omniverse balanced training framework. Extensive experiments validate the effectiveness of our method. Compared with the open-source training code of InternVL-Chat, training time is reduced greatly, achieving about a 1.8$\times$ speed-up. Our method's efficacy and generalizability are further validated across various models and datasets. Code will be released at https://github.com/ModelTC/OmniBal.
Lay Summary: Vision-language models, which understand both images and text, are becoming more powerful—but training them is slow and inefficient on large computer clusters. We found that this happens because the image and text parts of the model are very different, leading to an uneven workload across devices. To fix this, we created OmniBal, a new training method that balances the work more fairly. It does this in three ways: by grouping training data more evenly, splitting the model into better-balanced parts, and managing memory more efficiently during training. These improvements work together to make training faster and more stable. In our tests, OmniBal sped up training by about 1.8× compared to current methods. It also works well on different models and datasets. This research matters because it helps developers train large, multi-modal models more efficiently—saving time, energy, and computing resources.
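The data-balancing idea described above — grouping variable-length instances so every device receives a similar total workload — can be sketched with a simple greedy heuristic. This is an illustrative assumption, not the paper's actual algorithm: the function name `balance_batches`, the use of token counts as the cost proxy, and the longest-processing-time (LPT) greedy strategy are all hypothetical here.

```python
def balance_batches(token_counts, num_devices):
    """Greedily assign instances (identified by index, weighted by token
    count) to devices so per-device total load is as even as possible.

    A minimal sketch of balanced mini-batch grouping, assuming an
    LPT-style greedy heuristic; the real OmniBal method may differ.
    """
    # Visit instances from most to least expensive, and always hand the
    # next instance to whichever device currently has the lightest load.
    order = sorted(range(len(token_counts)), key=lambda i: -token_counts[i])
    loads = [0] * num_devices
    groups = [[] for _ in range(num_devices)]
    for i in order:
        d = loads.index(min(loads))  # lightest device so far
        groups[d].append(i)
        loads[d] += token_counts[i]
    return groups, loads

# Hypothetical per-instance token counts for one global batch.
counts = [512, 96, 384, 128, 640, 256, 160, 320]
groups, loads = balance_batches(counts, num_devices=4)
```

With these example counts the per-device loads come out nearly equal, whereas a naive sequential split would leave some devices idle while others straggle — the imbalance the abstract describes.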
Link To Code: https://github.com/ModelTC/OmniBal
Primary Area: Deep Learning->Large Language Models
Keywords: VLM, Balance Training
Submission Number: 8352