CR-Net: Scaling Parameter-Efficient Training with Cross-Layer Low-Rank Structure

Published: 26 Jan 2026 · Last Modified: 26 Feb 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Parameter-efficient, LLM pre-training, cross-layer low-rank, low-rank pre-training.
TL;DR: We propose CR-Net, a low-rank framework for LLM pre-training that leverages cross-layer activation residuals to improve model efficiency while maintaining performance, reducing computational and memory costs.
Abstract: Low-rank architectures have become increasingly important for efficient large language model (LLM) pre-training, providing substantial reductions in both parameter complexity and memory/computational demands. Despite these advantages, current low-rank methods face three critical shortcomings: (1) compromised model performance, (2) considerable computational overhead, and (3) limited activation memory savings. To address these limitations, we propose **C**ross-layer Low-**R**ank residual **Net**work (**CR-Net**), an innovative parameter-efficient framework inspired by our discovery that inter-layer activation residuals possess low-rank properties. CR-Net implements this insight through a dual-path architecture that efficiently reconstructs layer activations by combining previous-layer outputs with their low-rank differences, thereby maintaining high-rank information with minimal parameters. We further develop a specialized activation recomputation strategy tailored for CR-Net that dramatically reduces memory requirements. Extensive pre-training experiments across model scales from 60M to 7B parameters demonstrate that CR-Net consistently outperforms state-of-the-art low-rank frameworks while requiring fewer computational resources and less memory.
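To make the dual-path idea concrete, below is a minimal PyTorch sketch of a cross-layer low-rank residual block under assumptions drawn only from the abstract: the block reuses the previous layer's activation and adds a rank-r correction to it, so only the (small) activation residual is parameterized. The class and parameter names (`CrossLayerLowRankBlock`, `down`, `up`, `rank`) are hypothetical and are not taken from the paper; the actual CR-Net architecture and its recomputation strategy may differ.

```python
import torch
import torch.nn as nn


class CrossLayerLowRankBlock(nn.Module):
    """Hypothetical sketch of a cross-layer low-rank residual block.

    Instead of producing a full-rank activation from scratch, the block
    reuses the previous layer's output and adds a low-rank estimate of the
    inter-layer activation residual, keeping the parameter count small.
    """

    def __init__(self, hidden_dim: int, rank: int):
        super().__init__()
        # Low-rank factors for the activation residual (assumed names).
        self.down = nn.Linear(hidden_dim, rank, bias=False)
        self.up = nn.Linear(rank, hidden_dim, bias=False)
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(self, prev_activation: torch.Tensor) -> torch.Tensor:
        # Rank-r estimate of the difference between consecutive layer outputs.
        residual = self.up(self.down(self.norm(prev_activation)))
        # Dual path: previous-layer output plus its low-rank difference.
        return prev_activation + residual


if __name__ == "__main__":
    x = torch.randn(2, 16, 512)  # (batch, seq_len, hidden_dim)
    block = CrossLayerLowRankBlock(hidden_dim=512, rank=32)
    print(block(x).shape)  # torch.Size([2, 16, 512])
```

Because only `down` and `up` are trained per block, the parameter cost per layer scales with 2·hidden_dim·rank rather than hidden_dim², which is the kind of saving the abstract attributes to exploiting low-rank inter-layer residuals.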
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 2550