ByteScale: Efficient Scaling of LLM Training with a 2048K Context Length on More Than 12,000 GPUs

Hao Ge, Junda Feng, Qi Huang, Fangcheng Fu, Xiaonan Nie, Lei Zuo, Haibin Lin, Bin Cui, Xin Liu

15 Jan 2026 · CoRR 2025 · CC BY-SA 4.0