Keywords: Tensor Optimization; Sparse Tensor Principal Component Analysis; Generalized Power Method
Abstract: Sparse tensor principal component analysis (STPCA) seeks interpretable low-dimensional representations of high-order data by enforcing sparsity across tensor modes.
However, the resulting optimization is highly nonconvex and computationally demanding, particularly in high-dimensional and unbalanced settings.
We introduce GP-STPCA, a unified framework that reformulates STPCA into structured sparse PCA subproblems solvable via the generalized power method.
Our approach accommodates both $\ell_{0}$- and $\ell_{1}$-penalties, in single-unit and block formulations, enabling efficient extraction of multiple sparse components.
We provide theoretical guarantees, proving that the reformulation is equivalent to the original sparse objective and analyzing the convergence of the resulting iterations.
Algorithmically, GP-STPCA further leverages efficient pattern-finding and post-processing to shrink the search space in column-dominant settings.
Extensive experiments on synthetic recovery tasks, ImageNet reconstruction, and brain connectome analysis demonstrate that GP-STPCA consistently outperforms the state-of-the-art sparseGeoHOPCA baseline in accuracy, sparsity control, interpretability, and computational efficiency.
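To make the generalized power step concrete, here is a minimal NumPy sketch of a single-unit, $\ell_1$-penalized generalized power iteration of the kind the abstract alludes to. The function name `gpower_l1_single_unit`, the penalty parameter `gamma`, and the initialization are illustrative assumptions, not the authors' GP-STPCA code; in GP-STPCA such an update would presumably be applied to the structured sparse PCA subproblems obtained from the tensor reformulation (e.g., to mode-wise unfoldings).

```python
import numpy as np

def gpower_l1_single_unit(A, gamma, max_iter=200, tol=1e-8):
    """Single-unit generalized power iteration for l1-penalized sparse PCA.

    Illustrative sketch only (classical GPower-style update), not the
    authors' GP-STPCA implementation.

    A     : (n, p) data matrix whose columns a_i are the variables.
    gamma : sparsity penalty; larger gamma gives a sparser loading vector.
    Returns a unit-norm sparse loading vector z of length p.
    """
    n, p = A.shape
    # Initialize x on the unit sphere in R^n, e.g., the largest-norm column of A.
    x = A[:, np.argmax(np.linalg.norm(A, axis=0))]
    x = x / np.linalg.norm(x)

    for _ in range(max_iter):
        c = A.T @ x                                # correlations a_i^T x
        w = np.maximum(np.abs(c) - gamma, 0.0)     # soft-thresholded magnitudes
        if not np.any(w):                          # gamma too large: all-zero pattern
            return np.zeros(p)
        x_new = A @ (w * np.sign(c))               # generalized power step
        x_new = x_new / np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new

    # Recover the sparse loading vector from the final iterate.
    c = A.T @ x
    z = np.maximum(np.abs(c) - gamma, 0.0) * np.sign(c)
    return z / np.linalg.norm(z)
```

Each iteration costs one pass over the data (two matrix-vector products), which is what makes power-method-style updates attractive in high-dimensional or unbalanced settings; an $\ell_0$-penalized or block variant would replace the soft-thresholding step accordingly.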
Supplementary Material: zip
Primary Area: optimization
Submission Number: 898