Keywords: Bias–variance tradeoff, tensor denoising, Tucker decomposition, HOSVD, rank-adaptive estimation, truncated SVD, 3D MRI experiments.
TL;DR: Rank-adaptive HOSVD gives explicit bias–variance bounds for tensor denoising without exact low-rankness; includes a unified result for matrix SVD estimators as a byproduct.
Abstract: We study denoising of a third-order tensor when the ground-truth tensor is **not** necessarily Tucker low-rank. Specifically, we observe
$$
Y = X^\ast + Z \in \mathbb{R}^{p_{1} \times p_{2} \times p_{3}},
$$
where $X^\ast$ is the ground-truth tensor and $Z$ is the noise tensor. We propose a simple variant of the higher-order tensor SVD estimator $\widetilde{X}$. We show that uniformly over all user-specified Tucker ranks $(r_{1},r_{2},r_{3})$,
$$
\| \widetilde{X} - X^\ast \|^2_{\mathrm{F}} = O \Big( \kappa^2 \Big\{ r_{1}r_{2}r_{3} + \sum_{k=1}^{3} p_{k} r_{k} \Big\} \; + \; \xi_{(r_{1},r_{2},r_{3})}^2 \Big) \quad \text{ with high probability.}
$$
Here, the bias term $\xi_{(r_1,r_2,r_3)}$ is the best achievable approximation error of $X^\ast$ over the class of tensors with Tucker ranks $(r_1,r_2,r_3)$; $\kappa^2$ quantifies the noise level; and the variance term $\kappa^2 \{r_{1}r_{2}r_{3}+\sum_{k=1}^{3} p_{k} r_{k}\}$ scales with the effective number of free parameters in the estimator $\widetilde{X}$. Our analysis yields a clean rank-adaptive bias–variance tradeoff: as we increase the ranks of the estimator $\widetilde{X}$, the bias $\xi_{(r_{1},r_{2},r_{3})}$ decreases and the variance increases. As a byproduct, we also obtain a convenient bias–variance decomposition for vanilla low-rank SVD matrix estimators.
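For intuition, the estimator analyzed here is a variant of the classical truncated HOSVD: project each mode-$k$ unfolding of $Y$ onto its top-$r_k$ left singular subspace. The sketch below implements the vanilla truncated HOSVD (not the paper's exact variant); the function name `hosvd_truncate` is illustrative, not from the paper.

```python
import numpy as np

def hosvd_truncate(Y, ranks):
    """Vanilla truncated HOSVD of a 3rd-order tensor Y.

    For each mode k, take the top-r_k left singular vectors U_k of the
    mode-k unfolding of Y, then return Y multiplied by the projectors
    U_k U_k^T along every mode.
    """
    factors = []
    for k, r in enumerate(ranks):
        # mode-k unfolding: shape (p_k, prod of the other dimensions)
        unf = np.moveaxis(Y, k, 0).reshape(Y.shape[k], -1)
        U, _, _ = np.linalg.svd(unf, full_matrices=False)
        factors.append(U[:, :r])
    X = Y
    for k, U in enumerate(factors):
        # project mode k onto span(U_k), keeping the axis order intact
        X = np.moveaxis(
            np.tensordot(U @ U.T, np.moveaxis(X, k, 0), axes=(1, 0)), 0, k
        )
    return X
```

If $X^\ast$ has exact Tucker ranks $(r_1,r_2,r_3)$ and there is no noise, this projection recovers $X^\ast$ exactly; the paper's bound quantifies how the error degrades as the chosen ranks under- or over-shoot the (approximate) ranks of $X^\ast$ under noise.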
Supplementary Material: pdf
Primary Area: learning theory
Submission Number: 7031