## Fast Graph Learning with Unique Optimal Solutions

Mar 08, 2021 (edited Apr 22, 2021) · GTRL 2021 Poster
• Keywords: Graph, Learning, Fast, SVD
• TL;DR: We learn graph models via SVD of dense matrices, without ever computing those matrices, achieving competitive performance while training much faster.
• Abstract: We consider two popular Graph Representation Learning (GRL) methods: message passing for node classification and network embedding for link prediction. For each, we pick a popular model that we (i) *linearize* and (ii) switch to a *Frobenius norm error minimization* training objective. These simplifications cast training as finding the optimal parameters in closed form. We program in TensorFlow a functional form of Truncated Singular Value Decomposition (SVD), such that we can decompose a dense matrix $\mathbf{M}$ without explicitly computing $\mathbf{M}$. We achieve competitive performance on popular GRL tasks while providing orders-of-magnitude speedups. We open-source our code at http://github.com/samihaija/tf-fsvd
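To illustrate the core idea of decomposing a dense matrix without materializing it, here is a minimal sketch using SciPy's `LinearOperator` rather than the paper's TensorFlow implementation. The example matrix $\mathbf{M} = \mathbf{A}\mathbf{A}^\top + \mathbf{A}$ is a hypothetical stand-in for the dense matrices implied by linearized GRL models; only matrix-vector products with $\mathbf{M}$ are ever evaluated.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, svds

n, k = 200, 16
rng = np.random.default_rng(0)
# A stands in for a (typically sparse) graph matrix; M = A A^T + A
# is an illustrative dense matrix we never form explicitly.
A = rng.standard_normal((n, n))

def matvec(v):
    # Computes M @ v = A @ (A.T @ v) + A @ v, touching only A.
    return A @ (A.T @ v) + A @ v

def rmatvec(v):
    # Computes M.T @ v = A @ (A.T @ v) + A.T @ v.
    return A @ (A.T @ v) + A.T @ v

M_op = LinearOperator(shape=(n, n), matvec=matvec, rmatvec=rmatvec)

# Truncated rank-k SVD of the implicit matrix M.
U, s, Vt = svds(M_op, k=k)
```

The same pattern underlies the paper's functional SVD: each model-specific dense matrix is represented by a routine for multiplying it against vectors, so the cost scales with the sparse inputs rather than with the dense product.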