Fine-grained Generalization Analysis of Inductive Matrix Completion

21 May 2021, 20:50 (edited 23 Jan 2022) · NeurIPS 2021 Poster
  • Keywords: Inductive Matrix Completion, Statistical Learning Theory, Nuclear Norm Regularisation
  • TL;DR: We prove distribution-free bounds for inductive matrix completion with rate $\widetilde{O}(d^{3/2}\sqrt{r})$ and provide an inductive analogue of the weighted trace norm which brings the rate down to $\widetilde{O}(rd)$.
  • Abstract: In this paper, we bridge the gap between the state-of-the-art theoretical results for matrix completion with the nuclear norm and their equivalent in *inductive matrix completion*: (1) In the distribution-free setting, we prove bounds improving the previously best scaling of $O(rd^2)$ to $\widetilde{O}(d^{3/2}\sqrt{r})$, where $d$ is the dimension of the side information and $r$ is the rank. (2) We introduce the (smoothed) *adjusted trace-norm minimization* strategy, an inductive analogue of the weighted trace norm, for which we show guarantees of the order $\widetilde{O}(dr)$ under arbitrary sampling. In the inductive case, a similar rate was previously achieved only under uniform sampling and for exact recovery. Both our results align with the state of the art in the particular case of standard (non-inductive) matrix completion, where they are known to be tight up to log terms. Experiments further confirm that our strategy outperforms standard inductive matrix completion on various synthetic datasets and real problems, justifying its place as an important tool in the arsenal of methods for matrix completion using side information.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
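For readers unfamiliar with the setting, the sketch below illustrates the general inductive matrix completion problem the abstract refers to: given side-information matrices $A$ and $B$, recover a core matrix $Z$ such that $AZB^\top$ fits the observed entries, with a nuclear-norm penalty on $Z$. This is a generic proximal-gradient baseline for the plain (unweighted) trace norm, not the paper's adjusted trace-norm method; the function names `svt` and `inductive_mc` and all parameter values are illustrative assumptions.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: the proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def inductive_mc(A, B, obs, lam=0.01, step=0.1, iters=300):
    """Recover a core matrix Z so that A @ Z @ B.T fits the observed entries.

    A: (n, d1) row side information; B: (m, d2) column side information.
    obs: list of (i, j, value) observed entries of the target matrix.
    Plain proximal gradient on mean squared loss + lam * nuclear norm of Z
    (a generic baseline; NOT the paper's adjusted trace-norm strategy).
    """
    d1, d2 = A.shape[1], B.shape[1]
    Z = np.zeros((d1, d2))
    rows = np.array([i for i, _, _ in obs])
    cols = np.array([j for _, j, _ in obs])
    vals = np.array([v for _, _, v in obs])
    for _ in range(iters):
        # Predictions a_i^T Z b_j for each observed pair (i, j).
        pred = np.einsum('kp,pq,kq->k', A[rows], Z, B[cols])
        resid = pred - vals
        # Gradient of the mean squared loss: (1/K) * sum_k resid_k a_k b_k^T.
        grad = A[rows].T @ (resid[:, None] * B[cols]) / len(vals)
        # Gradient step followed by the nuclear-norm proximal step.
        Z = svt(Z - step * grad, step * lam)
    return Z
```

With enough observations relative to $d_1 d_2$, the low-dimensional core $Z$ is recoverable even when most entries of the full $n \times m$ matrix are missing, which is the sample-complexity advantage that the abstract's $d$-dependent rates quantify.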