OSCAR: Orthogonalized Sequential Component Analysis for Tensor-on-Tensor Regression

09 Sept 2025 (modified: 11 Feb 2026). Submitted to ICLR 2026. License: CC BY 4.0
Keywords: Tensor-on-Tensor Regression, Tensor Decomposition, Component Analysis, Riemannian Gradient Descent (RGD)
Abstract: Tensor-on-tensor (TOT) regression is a critical task in many fields. However, its application is severely hindered by the curse of dimensionality arising from the exponential growth of parameters in the coefficient tensor. Existing methods primarily fall into two categories: low-rank approximations, which often have limited predictive accuracy and interpretability, and sequential component extraction methods that rely on data-space deflation. This deflation mechanism suffers from greedy, sub-optimal solutions, error propagation, and a lack of component orthogonality, hindering feature disentanglement. To address these limitations, we propose $\textbf{O}$rthogonalized $\textbf{S}$equential $\textbf{C}$omponent $\textbf{A}$nalysis for Tensor-on-Tensor $\textbf{R}$egression ($\textbf{OSCAR}$). First, we design an Input-Mode Orthogonal Block Term ($\textbf{IMOBT}$) low-rank structure for the coefficient tensor, which inherently enables the supervised extraction of orthogonal components. Building on this, we develop a Sequential Riemannian Optimization ($\textbf{SRO}$) framework that replaces classical data-space deflation with explicit geometric constraints in the parameter space. This is achieved through a Subspace-Constrained Riemannian Gradient Descent algorithm on the Stiefel manifold, which rigorously enforces orthogonality. Furthermore, to alleviate the greedy bias of sequential learning, we introduce a novel collaborative refinement mechanism that re-optimizes the synergy among all components whenever a new one is added, enabling an iterative look-back toward a superior global solution. Extensive experiments on synthetic and real-world datasets demonstrate that our proposed OSCAR framework not only achieves competitive predictive performance but also shows significant advantages in supervised component extraction and feature disentanglement.
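The abstract's central geometric ingredient is Riemannian gradient descent on the Stiefel manifold, i.e., optimizing over matrices with orthonormal columns by projecting the Euclidean gradient onto the tangent space and retracting back onto the manifold. The paper's actual algorithm (with its subspace constraints and IMOBT structure) is not specified here, so the following is only a minimal sketch of the generic Stiefel RGD step it builds on; the toy objective `||A - X B||_F^2` and all variable names are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def stiefel_rgd_step(X, egrad, lr=0.05):
    """One Riemannian gradient-descent step on the Stiefel manifold
    St(n, p) = {X in R^{n x p} : X^T X = I}. (Generic sketch, not the
    paper's subspace-constrained variant.)"""
    # Project the Euclidean gradient onto the tangent space at X:
    # rgrad = G - X * sym(X^T G)
    XtG = X.T @ egrad
    rgrad = egrad - X @ ((XtG + XtG.T) / 2)
    # Retract back onto the manifold via QR (sign-fixed so the map is smooth)
    Q, R = np.linalg.qr(X - lr * rgrad)
    return Q * np.sign(np.diag(R))

# Toy problem (hypothetical): minimize ||A - X B||_F^2 over X on St(n, p),
# a stand-in for fitting one orthonormal component block.
rng = np.random.default_rng(0)
n, p = 8, 3
A = rng.standard_normal((n, p))
B = np.eye(p)
X, _ = np.linalg.qr(rng.standard_normal((n, p)))  # random feasible start
for _ in range(200):
    egrad = -2 * (A - X @ B) @ B.T  # Euclidean gradient of the toy loss
    X = stiefel_rgd_step(X, egrad)

# X remains exactly orthonormal throughout: X^T X = I
assert np.allclose(X.T @ X, np.eye(p), atol=1e-8)
```

Unlike data-space deflation, which only encourages decorrelation through residualized targets, the retraction here keeps every iterate exactly on the manifold, so orthogonality is a hard constraint rather than a by-product of the fitting sequence.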
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 3429