Matrix Data Deep Decoder - Geometric Learning for Structured Data Completion

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission · Readers: Everyone
Keywords: Deep learning, Non-Euclidean data completion, Sparse matrices, Recommender systems, Recommendation systems, Sparse representations
Abstract: In this work, we present a fully convolutional, end-to-end method for reconstructing corrupted sparse matrices of non-Euclidean data. The classic example of such matrices comes from recommender systems, where rows/columns represent items/users and the entries are ratings. Our method is inspired by the surprising and spectacular success of methods such as "deep image prior" and "deep decoder" for corrupted image completion. In sharp contrast to previous matrix completion methods, in which the latent matrix or its factors directly serve as the optimization variable, our method parameterizes the matrix as the weights of a graph neural network acting on a random noisy input. We then tune the network parameters so that the output (through its factors) matches the observed entries of the initial sparse matrix as closely as possible, obtaining state-of-the-art matrix completion results. Beyond its conceptual simplicity, being simply a non-Euclidean generalization of the deep image prior, our method uses fewer parameters than previously presented methods, which makes it more tractable, more computationally efficient, and more applicable to real-world tasks. The method achieves state-of-the-art results on the classical matrix completion benchmarks. It also shows, surprisingly, that an untrained convolutional neural network can serve as a good prior not only for image completion but also for matrix completion when redefined for graphs.
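A minimal, illustrative sketch (not the authors' released code) of the idea described in the abstract: the completed matrix is parameterized as the output of a small graph neural network applied to fixed random inputs, and only the network weights are optimized against the observed entries. The module names (GraphConv, DeepMatrixPrior), the two-branch row/column architecture, and the random row/column graphs below are assumptions made for illustration.

```python
import torch
import torch.nn as nn


def random_symmetric_graph(n, p):
    """Random symmetric adjacency matrix standing in for a user/item graph."""
    A = (torch.rand(n, n) < p).float()
    return ((A + A.t()) > 0).float()


def normalized_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A = A + torch.eye(A.shape[0])
    d_inv_sqrt = A.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)


class GraphConv(nn.Module):
    """One graph convolution layer: X -> act(A_hat @ X @ W)."""
    def __init__(self, in_dim, out_dim, act=True):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        self.act = act

    def forward(self, A_hat, X):
        H = self.lin(A_hat @ X)
        return torch.relu(H) if self.act else H


class DeepMatrixPrior(nn.Module):
    """Two GNN branches map random inputs to row/column factors; their product is the matrix."""
    def __init__(self, feat, rank):
        super().__init__()
        self.row1, self.row2 = GraphConv(feat, feat), GraphConv(feat, rank, act=False)
        self.col1, self.col2 = GraphConv(feat, feat), GraphConv(feat, rank, act=False)

    def forward(self, A_row, Z_row, A_col, Z_col):
        U = self.row2(A_row, self.row1(A_row, Z_row))   # row (e.g. item) factors
        V = self.col2(A_col, self.col1(A_col, Z_col))   # column (e.g. user) factors
        return U @ V.t()


# Toy problem: recover a low-rank matrix from ~20% of its entries.
n_rows, n_cols, rank, feat = 100, 80, 5, 32
M_true = torch.randn(n_rows, rank) @ torch.randn(rank, n_cols)
mask = (torch.rand(n_rows, n_cols) < 0.2).float()

A_row = normalized_adj(random_symmetric_graph(n_rows, 0.05))
A_col = normalized_adj(random_symmetric_graph(n_cols, 0.05))
Z_row = torch.randn(n_rows, feat)   # fixed random "noise" input, never trained
Z_col = torch.randn(n_cols, feat)

model = DeepMatrixPrior(feat, rank)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    M_hat = model(A_row, Z_row, A_col, Z_col)
    loss = (mask * (M_hat - M_true)).pow(2).sum() / mask.sum()   # fit observed entries only
    loss.backward()
    opt.step()

with torch.no_grad():
    M_hat = model(A_row, Z_row, A_col, Z_col)
    held_out = ((1 - mask) * (M_hat - M_true)).pow(2).sum() / (1 - mask).sum()
print(f"observed MSE {loss.item():.4f} | held-out MSE {held_out.item():.4f}")
```

The factored output U @ V.t() mirrors the abstract's remark that the sparse matrix is fit "using its factors": the network never stores the dense matrix as a free variable, only the weights that generate its row and column factors from fixed random inputs.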
One-sentence Summary: Non-Euclidean matrix completion with an end-to-end, fully convolutional graph neural network based on a generalization of the deep image prior.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=cXMF1NKpXx