Skip-graph: Learning graph embeddings with an encoder-decoder model
John Boaz Lee, Xiangnan Kong
Nov 04, 2016 (modified: Jan 11, 2017) · ICLR 2017 conference submission · readers: everyone
Abstract: In this work, we study the problem of feature representation learning for graph-structured data. Much of the existing work in the area is task-specific and based on supervised techniques. We study a method for obtaining a generic feature representation for a graph using an unsupervised approach. The neural encoder-decoder model has been used in the natural language processing domain to learn feature representations of sentences. In our proposed approach, we train the encoder-decoder model to predict the random walk sequences of neighboring regions in a graph given a random walk along a particular region. The goal is to map subgraphs, as represented by their random walks, that are structurally and functionally similar to nearby locations in feature space. We evaluate the learned graph vectors on the graph classification task using several real-world datasets. The proposed model achieves competitive results against state-of-the-art techniques.
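The training-pair construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`random_walk`, `skip_graph_pairs`) and the three-way segmentation of a single walk into an encoder input flanked by two decoder targets are assumptions, mirroring how the skip-thought objective pairs a sentence with its neighbors.

```python
import random

def random_walk(adj, start, length, rng):
    # Walk `length` nodes over the graph given as an adjacency dict,
    # choosing a uniformly random neighbor at each step.
    walk = [start]
    while len(walk) < length:
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

def skip_graph_pairs(adj, start, seg_len, rng):
    # Sample one long walk and split it into three equal segments.
    # The middle segment is the encoder input; the flanking segments
    # are the decoder targets (the "neighboring regions" to predict).
    walk = random_walk(adj, start, 3 * seg_len, rng)
    prev_seg = walk[:seg_len]
    cur_seg = walk[seg_len:2 * seg_len]
    next_seg = walk[2 * seg_len:]
    return cur_seg, (prev_seg, next_seg)

# Toy example: a triangle graph.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
enc_input, (target_prev, target_next) = skip_graph_pairs(
    adj, start=0, seg_len=3, rng=random.Random(0))
```

In a full pipeline, many such (input, targets) pairs would be fed to a sequence-to-sequence network, and the trained encoder's hidden state would serve as the graph's feature vector.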
TL;DR: An unsupervised method for generating graph feature representations based on the encoder-decoder model.
Keywords: Unsupervised Learning, Deep Learning