EinSort: Sorting is All We Need for Tensorizing LLM

08 May 2026 (modified: 11 May 2026) | ICML 2026 Workshop CoLoRAI Submission | CC BY 4.0
Keywords: Tensor decomposition, LLM, KV cache compression, tensor network
TL;DR: EinSort adaptively tensorizes large models by optimizing index orderings and network structure to uncover low-rank representations, enabling more accurate compression and reduced KV cache memory.
Abstract: Tensor networks provide efficient representations for compressing large neural networks. By carefully designing their shapes and topologies, they can significantly reduce memory and computational costs. However, identifying implicit low-rank structures in large foundation models remains challenging due to their enormous scale and unstructured weight distributions. We propose an adaptive tensorization method that discovers inherent low-rank structure in a target tensor by optimizing its index ordering. Experiments on weight and KV-cache compression demonstrate improved reconstruction quality compared to baselines.
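To illustrate the core idea of searching over index orderings to expose low-rank structure, here is a minimal, hypothetical sketch (not the authors' EinSort algorithm): a weight matrix is reshaped into a higher-order tensor, and we search over axis permutations for the one whose matricization has the lowest fixed-rank truncated-SVD error. All function names and the brute-force permutation search are illustrative assumptions.

```python
# Hypothetical sketch of index-ordering search for tensorization;
# this is NOT the paper's actual method, only an illustration of the idea.
import itertools
import numpy as np


def truncated_svd_error(mat, rank):
    """Frobenius-norm error of the best rank-`rank` approximation of `mat`."""
    s = np.linalg.svd(mat, compute_uv=False)
    return float(np.sqrt(np.sum(s[rank:] ** 2)))


def best_index_order(weight, tensor_shape, rank=4):
    """Reshape `weight` to `tensor_shape`, then search axis permutations
    for the one whose (first half vs. second half of axes) matricization
    is best approximated at the given rank."""
    t = weight.reshape(tensor_shape)
    split = t.ndim // 2
    best = None
    for perm in itertools.permutations(range(t.ndim)):
        p = np.transpose(t, perm)
        rows = int(np.prod(p.shape[:split]))
        err = truncated_svd_error(p.reshape(rows, -1), rank)
        if best is None or err < best[1]:
            best = (perm, err)
    return best


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A matrix with hidden Kronecker structure: under the right index
    # grouping, its unfolding is exactly rank 1.
    a = rng.standard_normal((4, 4))
    b = rng.standard_normal((4, 4))
    w = np.kron(a, b)  # 16 x 16, low-rank only after reordering indices
    perm, err = best_index_order(w, (4, 4, 4, 4), rank=1)
    print("best permutation:", perm, "residual error:", err)
```

In this toy example the search recovers a permutation that groups the Kronecker factors' indices together, so the rank-1 residual is numerically zero; the default (unpermuted) unfolding of the same matrix is full-rank. Real methods would replace the brute-force permutation search with something scalable.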
Submission Number: 116