Abstract: Most classic tensor CP decomposition models are designed for the squared loss, using the Euclidean distance as a local proximal term. However, the Euclidean distance is unsuitable for the generalized loss functions that arise with diverse types of real-world data, such as integer and binary data. Consequently, algorithms developed under the squared loss do not adapt easily to these generalized losses, partly because gradient Lipschitz continuity is absent. This paper studies generalized tensor CP decomposition, employing the Bregman distance as the proximal term, and introduces an inertial accelerated block randomized stochastic mirror descent algorithm (iTableSMD). Within a broader multi-block variance-reduction and inertial-acceleration framework, we establish a sublinear convergence rate for the subsequences produced by the iTableSMD algorithm. We further show that iTableSMD requires at most \(\mathcal{O}(\varepsilon^{-2})\) iterations in expectation to attain an \(\varepsilon\)-stationary point, and we establish global convergence of the whole sequence. Numerical experiments on real datasets demonstrate that the proposed algorithm is efficient and outperforms existing state-of-the-art methods.
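For context, the Bregman-proximal step underlying mirror descent takes the following standard form; this is a generic sketch of the technique named in the abstract, not a statement of the paper's specific iTableSMD update. Given a strongly convex kernel \(h\), the Bregman distance is
\[
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle,
\]
and a stochastic mirror descent step with gradient estimate \(\tilde{g}^k\) and step size \(\eta_k\) reads
\[
x^{k+1} = \operatorname*{arg\,min}_{x} \left\{ \langle \tilde{g}^k, x \rangle + \frac{1}{\eta_k} D_h(x, x^k) \right\}.
\]
Choosing \(h(x) = \tfrac{1}{2}\|x\|_2^2\) recovers the usual Euclidean proximal step, which is why replacing it with a general Bregman distance can accommodate losses that lack gradient Lipschitz continuity.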