Efficient and Accurate Tensor Compression via Recursive Sketching

Published: 03 Feb 2026, Last Modified: 03 Feb 2026 · AISTATS 2026 Poster · CC BY 4.0
TL;DR: This paper proposes recursive TensorSketch-based algorithms for CP and TT tensor compression, achieving variance bounds independent of the number of modes and asymptotically faster time complexity than existing baselines.
Abstract: The computation of inner products between high-order tensor data points is a fundamental task in numerous machine learning and scientific applications. However, the naive approach to these computations incurs exponential time complexity with respect to the number of modes. The work of Rakhshan and Rabusseau (AISTATS 2020) introduced an extension of random projections tailored to tensor datasets, which compresses large tensors into compact vectors (\textit{a.k.a.} sketches). Their approach provides unbiased estimates of the original pairwise inner products. However, the variance of their estimates grows exponentially with the number of modes, making the estimates unreliable for small sketch sizes. In this work, we propose improved sketching algorithms that provide unbiased estimates for pairwise inner products with significantly lower variance, independent of the number of modes, compared to that of Rakhshan and Rabusseau (AISTATS 2020). Furthermore, our approach offers asymptotically improved time complexity. Our sketching algorithm builds on the framework of Ahle et al. (SODA 2020), which proposed sketching techniques for high-degree \textit{polynomial kernels}.
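As a minimal illustration of the sketching principle the abstract builds on (plain CountSketch on vectors, not the paper's recursive tensor construction), the following sketch compresses two vectors into a smaller dimension while keeping the inner product of the sketches an unbiased estimate of the original inner product; averaging over independent sketches shows the estimate concentrating around the true value:

```python
import numpy as np

rng = np.random.default_rng(0)

def count_sketch(x, h, s, m):
    """Hash each coordinate of x into one of m buckets with a random sign.

    This is a linear map whose sketched inner products are unbiased
    estimates of the original inner products.
    """
    out = np.zeros(m)
    np.add.at(out, h, s * x)  # accumulate signed coordinates per bucket
    return out

d, m, trials = 256, 64, 5000
x = rng.standard_normal(d)
y = rng.standard_normal(d)

estimates = []
for _ in range(trials):
    h = rng.integers(0, m, size=d)       # bucket hash for this sketch
    s = rng.choice([-1.0, 1.0], size=d)  # sign hash for this sketch
    estimates.append(count_sketch(x, h, s, m) @ count_sketch(y, h, s, m))

print("true inner product:", x @ y)
print("mean of sketched estimates:", np.mean(estimates))
```

The single-sketch variance scales roughly as $\|x\|^2\|y\|^2/m$; the paper's contribution concerns how such variance composes when sketches are applied recursively across many tensor modes.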
Submission Number: 907