Abstract: Low-rank tensor recovery (LRTR) is a natural extension of low-rank matrix recovery (LRMR) to high-dimensional arrays, which aims to reconstruct an underlying tensor <inline-formula><tex-math notation="LaTeX">$\boldsymbol{\mathcal {X}}$</tex-math></inline-formula> from incomplete linear measurements <inline-formula><tex-math notation="LaTeX">$\mathfrak {M}(\boldsymbol{\mathcal {X}})$</tex-math></inline-formula>. However, LRTR ignores the error caused by quantization, which limits its applicability when the quantization is coarse. In this work, we take into account the impact of extreme quantization and suppose the quantizer degrades into a comparator that acquires only the signs of <inline-formula><tex-math notation="LaTeX">$\mathfrak {M}(\boldsymbol{\mathcal {X}})$</tex-math></inline-formula>. We still aim to recover <inline-formula><tex-math notation="LaTeX">$\boldsymbol{\mathcal {X}}$</tex-math></inline-formula> from these binary measurements. Under the tensor Singular Value Decomposition (t-SVD) framework, two recovery methods are proposed: the first is a tensor hard singular tube thresholding method, and the second is a constrained tensor nuclear norm minimization method. These methods can recover a real <inline-formula><tex-math notation="LaTeX">$n_1\times n_2\times n_3$</tex-math></inline-formula> tensor <inline-formula><tex-math notation="LaTeX">$\boldsymbol{\mathcal {X}}$</tex-math></inline-formula> with tubal rank <inline-formula><tex-math notation="LaTeX">$r$</tex-math></inline-formula> from <inline-formula><tex-math notation="LaTeX">$m$</tex-math></inline-formula> random Gaussian binary measurements, with the recovery error decaying at a polynomial rate in the oversampling factor <inline-formula><tex-math notation="LaTeX">$\lambda :=m/((n_1+n_2)n_3r)$</tex-math></inline-formula>. To improve the convergence rate, we develop a new quantization scheme under which the error decays as an exponential function of <inline-formula><tex-math notation="LaTeX">$\lambda$</tex-math></inline-formula>. Numerical experiments verify our results, and applications to real-world data demonstrate the promising performance of the proposed methods.
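To make the setup concrete, the following is a minimal NumPy sketch of the one-bit Gaussian measurement model and a single hard singular tube thresholding step under the t-SVD framework. The back-projection proxy, tensor sizes, and unit-norm normalization are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
# Sketch: one-bit tensor measurements and one hard singular tube thresholding step.
# Assumed setup: Gaussian sensing tensors, sign-only measurements, unit-norm target
# (one-bit measurements carry no magnitude information).
import numpy as np

def tsvd_truncate(T, r):
    """Project a tensor onto tubal rank <= r: FFT along the third mode,
    truncate the SVD of each frontal slice, then invert the FFT."""
    n1, n2, n3 = T.shape
    F = np.fft.fft(T, axis=2)
    out = np.zeros_like(F)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(F[:, :, k], full_matrices=False)
        out[:, :, k] = (U[:, :r] * s[:r]) @ Vh[:r, :]
    return np.real(np.fft.ifft(out, axis=2))

rng = np.random.default_rng(0)
n1, n2, n3, r, m = 20, 20, 5, 2, 4000  # oversampling factor lambda = m / ((n1+n2)*n3*r)

# Ground-truth tensor with tubal rank r, normalized to unit Frobenius norm.
X = tsvd_truncate(rng.standard_normal((n1, n2, n3)), r)
X /= np.linalg.norm(X)

# m Gaussian sensing tensors and their sign (one-bit) measurements.
A = rng.standard_normal((m, n1, n2, n3))
y = np.sign(np.tensordot(A, X, axes=3))

# Back-projection proxy followed by one hard singular tube thresholding step.
proxy = np.tensordot(y, A, axes=([0], [0])) / m
X_hat = tsvd_truncate(proxy, r)
X_hat /= np.linalg.norm(X_hat)

print("relative error:", np.linalg.norm(X_hat - X))
```

In this sketch the proxy tensor concentrates (up to scale) around the normalized ground truth, so truncating it to tubal rank r and renormalizing gives a direction estimate whose error shrinks as the oversampling factor grows.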