Robust Max Entrywise Error Bounds for Tensor Estimation From Sparse Observations via Similarity-Based Collaborative Filtering

Published: 01 Jan 2023, Last Modified: 17 May 2023. IEEE Trans. Inf. Theory, 2023.
Abstract: Consider the task of estimating a 3-order $n \times n \times n$ tensor from noisy observations of randomly chosen entries in the sparse regime. We introduce a similarity-based collaborative filtering algorithm for estimating a tensor from sparse observations and argue that it achieves a sample complexity that nearly matches the conjectured lower bound on the sample complexity achievable by computationally efficient algorithms in the setting of low-rank tensors. Our algorithm computes similarities using the matrix obtained by flattening the tensor, and estimates the tensor entries with a nearest-neighbor estimator. We prove that the algorithm recovers a finite-rank tensor with maximum entry-wise error (MEE) and mean squared error (MSE) decaying to 0 as long as each entry is observed independently with probability $p = \Omega(n^{-3/2 + \kappa})$ for any arbitrarily small $\kappa > 0$. More generally, we establish the robustness of the estimator, showing that when arbitrary noise bounded by $\varepsilon \geq 0$ is added to each observation, the estimation error with respect to MEE and MSE degrades by ${\sf poly}(\varepsilon)$. Consequently, even if the tensor does not have finite rank but can be approximated within $\varepsilon \geq 0$ by a finite-rank tensor, the estimation error converges to ${\sf poly}(\varepsilon)$. Our analysis sheds light on the conjectured sample complexity lower bound, showing that it matches the connectivity threshold of the graph used by our algorithm to estimate similarity between coordinates.
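The abstract's pipeline — flatten the tensor into a matrix, compute similarities between rows of the flattened matrix over commonly observed entries, then estimate each entry by averaging over the most similar rows — can be sketched as follows. This is an illustrative simplification under assumed conventions, not the paper's exact algorithm: the function name `estimate_tensor`, the `num_neighbors` parameter, and the choice of negative mean squared difference as the similarity measure are all this sketch's own assumptions.

```python
import numpy as np

def estimate_tensor(T_obs, mask, num_neighbors=2):
    """Sketch of a similarity-based nearest-neighbor tensor estimator.

    T_obs: n x n x n array holding observed values (zeros elsewhere).
    mask:  boolean n x n x n array marking which entries are observed.
    Illustrative simplification of the approach described in the abstract,
    not the paper's exact algorithm.
    """
    n = T_obs.shape[0]
    # Flatten along the first mode: row i is the vectorized slice T[i, :, :].
    flat = (T_obs * mask).reshape(n, -1)
    flat_mask = mask.reshape(n, -1)
    est = np.zeros_like(T_obs, dtype=float)
    for i in range(n):
        # Similarity of rows i and j of the flattened matrix: negative mean
        # squared difference over their commonly observed entries.
        sims = np.full(n, -np.inf)
        for j in range(n):
            if j == i:
                continue
            common = flat_mask[i] & flat_mask[j]
            c = common.sum()
            if c > 0:
                sims[j] = -((flat[i] - flat[j]) ** 2 * common).sum() / c
        neighbors = np.argsort(sims)[-num_neighbors:]  # most similar rows
        # Nearest-neighbor estimate: average the observed values over the
        # neighborhood (including row i itself where observed).
        for j2 in range(T_obs.shape[1]):
            for k in range(T_obs.shape[2]):
                vals = [T_obs[m, j2, k] for m in neighbors if mask[m, j2, k]]
                if mask[i, j2, k]:
                    vals.append(T_obs[i, j2, k])
                est[i, j2, k] = np.mean(vals) if vals else 0.0
    return est
```

Restricting the similarity computation to commonly observed entries is what makes the scheme usable in the sparse regime, where no two rows of the flattened matrix are fully observed; the same idea underlies classical user-user collaborative filtering.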
