Online Tensor Max-Norm Regularization via Stochastic Optimization

Published: 31 May 2024, Last Modified: 31 May 2024. Accepted by TMLR.
Abstract: The advent of ubiquitous multidimensional arrays poses unique challenges for low-rank modeling of tensor data due to higher-order relationships, gross noise, and the large dimensions of the tensor. In this paper, we consider online low-rank estimation of tensor data in which the multidimensional data are revealed sequentially. Building on the recently proposed tensor-tensor product (t-product), we rigorously derive the induced tensor max-norm and reformulate it as an equivalent tensor factorization, whose factors consist of a tensor basis component and a coefficient component. With this formulation, we develop an online max-norm regularized tensor decomposition (OMRTD) method that alternately optimizes over the basis component and the coefficient tensor. The algorithm scales to the large-scale setting, and the sequence of solutions produced by OMRTD converges asymptotically to a stationary point of the expected loss function. Further, we extend OMRTD to tensor completion. Numerical experiments demonstrate the effectiveness and robustness of our algorithm. The code is available at https://github.com/twugithub/2024-TMLR-OMRTD.
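For readers unfamiliar with the t-product underlying the factorization described in the abstract, the sketch below is a minimal NumPy illustration of the standard t-product of two third-order tensors (FFT along the third mode, slice-wise matrix products in the Fourier domain, inverse FFT). The function name `t_product` and the shapes are illustrative assumptions; this is not the authors' implementation (see the linked repository for the actual OMRTD code).

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors A (n1 x n2 x n3) and B (n2 x n4 x n3).

    Computed in the Fourier domain: FFT along the third mode, a matrix
    product for each frontal slice, then an inverse FFT. The result has
    shape (n1, n4, n3).
    """
    n1, n2, n3 = A.shape
    m2, n4, m3 = B.shape
    assert n2 == m2 and n3 == m3, "inner tensor dimensions must agree"

    # Transform both tensors along the third mode.
    A_hat = np.fft.fft(A, axis=2)
    B_hat = np.fft.fft(B, axis=2)

    # Multiply corresponding frontal slices in the Fourier domain.
    C_hat = np.empty((n1, n4, n3), dtype=complex)
    for k in range(n3):
        C_hat[:, :, k] = A_hat[:, :, k] @ B_hat[:, :, k]

    # Transform back; the imaginary part is numerical noise for real inputs.
    return np.real(np.fft.ifft(C_hat, axis=2))
```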
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
1. Modify the statement regarding the work in Srebro & Shraibman (2005) in the second paragraph of Page 5.
2. Provide more details for the proof of Theorem 1 and add some proofs in the appendix.
3. Thoroughly proofread the paper and make necessary small changes.
Code: https://github.com/twugithub/2024-TMLR-OMRTD
Assigned Action Editor: ~Stephen_Becker1
Submission Number: 2152