Abstract: Recently, network embedding, which encodes the structural information of a graph into a vector space, has become popular for network analysis. Although recent methods show promising performance on various applications, the huge size of real-world graphs may hinder the direct application of existing network embedding methods. This paper presents NECL, a novel and efficient Network Embedding method that answers the following two questions: 1) Is there an ideal network Compression designed specifically for embedding? 2) Does network compression significantly boost network representation Learning? For the first question, we propose a neighborhood-similarity-based graph compression method that compresses the input graph into a smaller graph with little or no loss of information about its global structure and the local proximity of its vertices. For the second question, we embed the compressed graph instead of the original large graph to reduce the embedding cost. NECL is a general meta-strategy that improves the efficiency of all state-of-the-art random-walk-based graph embedding algorithms, including DeepWalk and Node2vec, without sacrificing their effectiveness. Extensive experiments validate the efficiency of NECL, which yields a 23-57% improvement in embedding time (including walking and learning time) without decreasing classification accuracy, as evaluated on single- and multi-label classification tasks on large real-world graphs.
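The abstract leaves the compression step unspecified, but a minimal sketch can illustrate the general idea of neighborhood-similarity-based compression. The sketch below assumes Jaccard similarity over neighbor sets as the similarity measure and a greedy merge rule; the `compress_graph` helper and its `threshold` parameter are illustrative assumptions, not the paper's actual algorithm.

```python
import networkx as nx

def compress_graph(G: nx.Graph, threshold: float = 0.5):
    """Greedily merge adjacent vertices whose neighbor sets have
    Jaccard similarity >= threshold into super-nodes, returning the
    compressed graph and the vertex -> super-node mapping.

    NOTE: this is a hypothetical sketch of neighborhood-similarity-based
    compression, not the method described in the NECL paper."""
    mapping = {v: v for v in G.nodes()}  # each vertex starts as its own super-node
    for u, v in G.edges():
        ru, rv = mapping[u], mapping[v]
        if ru == rv:
            continue  # already merged into the same super-node
        Nu, Nv = set(G.neighbors(u)), set(G.neighbors(v))
        union = Nu | Nv
        jaccard = len(Nu & Nv) / len(union) if union else 0.0
        if jaccard >= threshold:
            for w, r in mapping.items():  # fold v's super-node into u's
                if r == rv:
                    mapping[w] = ru
    # Build the compressed graph over super-nodes, dropping internal edges.
    H = nx.Graph()
    H.add_nodes_from(set(mapping.values()))
    for a, b in G.edges():
        if mapping[a] != mapping[b]:
            H.add_edge(mapping[a], mapping[b])
    return H, mapping
```

A random-walk embedder such as DeepWalk or Node2vec would then be run on the smaller graph `H` rather than `G`, and each original vertex would inherit the embedding of its super-node through `mapping`, which is what makes the walking and learning phases cheaper.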